Feb 16 21:37:46 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 21:37:46 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 21:37:46 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 
21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 21:37:46 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 
21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:46 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 
21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 21:37:47 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
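The restorecon pass finishes here. Nearly every path is reported as "not reset as customized by admin": restorecon leaves contexts it considers admin-customized (customizable types such as container_file_t) alone unless forced with -F, so only genuine mismatches, like the kubenswrapper binary relabeled from bin_t to kubelet_exec_t, are actually changed. A minimal Go sketch of inspecting the label restorecon is comparing against, read from the security.selinux extended attribute (the same string `ls -Z` displays); golang.org/x/sys/unix is assumed, and the path is a placeholder:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// selinuxContext returns the SELinux label stored in the security.selinux
// extended attribute of path (what `ls -Z` shows for it).
func selinuxContext(path string) (string, error) {
	buf := make([]byte, 256)
	n, err := unix.Getxattr(path, "security.selinux", buf)
	if err != nil {
		return "", err
	}
	// The attribute value is usually NUL-terminated; trim trailing NULs.
	for n > 0 && buf[n-1] == 0 {
		n--
	}
	return string(buf[:n]), nil
}

func main() {
	// Placeholder path for illustration only.
	ctx, err := selinuxContext("/var/lib/kubelet/device-plugins")
	if err != nil {
		fmt.Println("getxattr:", err)
		return
	}
	fmt.Println(ctx) // e.g. system_u:object_r:container_file_t:s0
}
```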
Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 21:37:47 crc kubenswrapper[4792]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.765553 4792 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770396 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770425 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770434 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770443 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770451 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770460 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770468 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770478 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
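The "Flag --... has been deprecated, ..." lines above are the warning format spf13/pflag (the flag library Kubernetes components use) prints when a flag marked deprecated is still supplied; the flags keep working, but the messages steer the settings toward the KubeletConfiguration file passed via --config. The I0216/W0216 prefixes on the following lines are klog headers: severity letter, MMDD date, time, PID (4792, matching kubenswrapper[4792]), and source file:line. A minimal sketch of the deprecation mechanism, assuming the github.com/spf13/pflag module; the flag value is illustrative:

```go
package main

import "github.com/spf13/pflag"

func main() {
	fs := pflag.NewFlagSet("kubelet", pflag.ContinueOnError)
	endpoint := fs.String("container-runtime-endpoint", "", "CRI socket endpoint")

	// MarkDeprecated keeps the flag functional but makes pflag print
	// "Flag --container-runtime-endpoint has been deprecated, <message>"
	// on stderr whenever the flag is actually set, matching the log lines.
	_ = fs.MarkDeprecated("container-runtime-endpoint",
		"This parameter should be set via the config file specified by the Kubelet's --config flag.")

	// Illustrative value, not taken from this host's unit file.
	_ = fs.Parse([]string{"--container-runtime-endpoint=unix:///var/run/crio/crio.sock"})
	_ = endpoint // the parsed value remains usable despite the warning
}
```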
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770488 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770498 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770506 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770514 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770521 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770545 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770553 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770560 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770568 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770576 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770584 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770592 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770599 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770607 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770623 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770662 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770670 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770678 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770686 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770697 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770706 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770715 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770724 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770734 4792 feature_gate.go:330] unrecognized feature gate: Example Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770742 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770750 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770759 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770768 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770777 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770785 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770793 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770802 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770809 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770817 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770824 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770833 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770841 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770850 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770858 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770865 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770873 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770883 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770894 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770903 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770912 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770920 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770928 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770937 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770947 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770955 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770963 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770971 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770979 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770987 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.770996 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771004 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771011 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771019 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771027 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771036 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771044 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771052 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.771060 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.772936 4792 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.772959 4792 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.772975 4792 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.772986 4792 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.772998 4792 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 21:37:47 crc kubenswrapper[4792]: 
I0216 21:37:47.773007 4792 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773019 4792 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773030 4792 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773039 4792 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773049 4792 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773059 4792 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773068 4792 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773077 4792 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773086 4792 flags.go:64] FLAG: --cgroup-root="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773095 4792 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773105 4792 flags.go:64] FLAG: --client-ca-file="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773113 4792 flags.go:64] FLAG: --cloud-config="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773122 4792 flags.go:64] FLAG: --cloud-provider="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773131 4792 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773141 4792 flags.go:64] FLAG: --cluster-domain="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773150 4792 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773159 4792 flags.go:64] FLAG: --config-dir="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773168 4792 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773178 4792 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773191 4792 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773200 4792 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773210 4792 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773219 4792 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773228 4792 flags.go:64] FLAG: --contention-profiling="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773237 4792 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773245 4792 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773255 4792 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773265 4792 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773276 4792 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773285 4792 flags.go:64] FLAG: 
--enable-controller-attach-detach="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773294 4792 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773303 4792 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773312 4792 flags.go:64] FLAG: --enable-server="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773320 4792 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773331 4792 flags.go:64] FLAG: --event-burst="100" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773341 4792 flags.go:64] FLAG: --event-qps="50" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773350 4792 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773359 4792 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773368 4792 flags.go:64] FLAG: --eviction-hard="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773378 4792 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773387 4792 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773396 4792 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773405 4792 flags.go:64] FLAG: --eviction-soft="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773414 4792 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773423 4792 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773432 4792 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773441 4792 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773450 4792 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773458 4792 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773467 4792 flags.go:64] FLAG: --feature-gates="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773479 4792 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773489 4792 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773500 4792 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773510 4792 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773519 4792 flags.go:64] FLAG: --healthz-port="10248" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773528 4792 flags.go:64] FLAG: --help="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773537 4792 flags.go:64] FLAG: --hostname-override="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773546 4792 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773555 4792 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773563 4792 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 
21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773572 4792 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773581 4792 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773590 4792 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773608 4792 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773617 4792 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773626 4792 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773657 4792 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773667 4792 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773676 4792 flags.go:64] FLAG: --kube-reserved="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773685 4792 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773694 4792 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773703 4792 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773711 4792 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773720 4792 flags.go:64] FLAG: --lock-file="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773729 4792 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773738 4792 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773747 4792 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773760 4792 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773770 4792 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773780 4792 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773788 4792 flags.go:64] FLAG: --logging-format="text" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773797 4792 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773807 4792 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773818 4792 flags.go:64] FLAG: --manifest-url="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773827 4792 flags.go:64] FLAG: --manifest-url-header="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773838 4792 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773847 4792 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773858 4792 flags.go:64] FLAG: --max-pods="110" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773867 4792 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773876 4792 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 21:37:47 
crc kubenswrapper[4792]: I0216 21:37:47.773885 4792 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773894 4792 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773903 4792 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773912 4792 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773922 4792 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773941 4792 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773950 4792 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773958 4792 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773967 4792 flags.go:64] FLAG: --pod-cidr="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773977 4792 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773991 4792 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.773999 4792 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774008 4792 flags.go:64] FLAG: --pods-per-core="0" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774017 4792 flags.go:64] FLAG: --port="10250" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774027 4792 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774036 4792 flags.go:64] FLAG: --provider-id="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774045 4792 flags.go:64] FLAG: --qos-reserved="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774054 4792 flags.go:64] FLAG: --read-only-port="10255" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774063 4792 flags.go:64] FLAG: --register-node="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774072 4792 flags.go:64] FLAG: --register-schedulable="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774081 4792 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774096 4792 flags.go:64] FLAG: --registry-burst="10" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774105 4792 flags.go:64] FLAG: --registry-qps="5" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774113 4792 flags.go:64] FLAG: --reserved-cpus="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774122 4792 flags.go:64] FLAG: --reserved-memory="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774133 4792 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774143 4792 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774152 4792 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774160 4792 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 21:37:47 crc 
kubenswrapper[4792]: I0216 21:37:47.774169 4792 flags.go:64] FLAG: --runonce="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774178 4792 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774187 4792 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774196 4792 flags.go:64] FLAG: --seccomp-default="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774205 4792 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774214 4792 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774223 4792 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774232 4792 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774241 4792 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774250 4792 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774259 4792 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774269 4792 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774277 4792 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774287 4792 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774296 4792 flags.go:64] FLAG: --system-cgroups="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774304 4792 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774319 4792 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774328 4792 flags.go:64] FLAG: --tls-cert-file="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774337 4792 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774347 4792 flags.go:64] FLAG: --tls-min-version="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774355 4792 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774365 4792 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774373 4792 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774382 4792 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774391 4792 flags.go:64] FLAG: --v="2" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774402 4792 flags.go:64] FLAG: --version="false" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774413 4792 flags.go:64] FLAG: --vmodule="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774423 4792 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.774433 4792 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774666 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 
21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774677 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774686 4792 feature_gate.go:330] unrecognized feature gate: Example Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774696 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774704 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774712 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774720 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774728 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774736 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774744 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774754 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774765 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774774 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774783 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774792 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774801 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774810 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774819 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774827 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774835 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774844 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774852 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774862 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
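The flags.go:64 records a little further up are the kubelet echoing back every command-line flag with its effective value, one record per flag, in --name="value" form. A companion sketch, under the same kubelet.log assumption, that folds that dump into a dictionary for inspection:

    import re

    # Each record looks like: flags.go:64] FLAG: --address="0.0.0.0"
    # re.findall with two capture groups yields (flag, value) pairs,
    # and dict() folds them into a single lookup table.
    text = open("kubelet.log", encoding="utf-8").read()
    flags = dict(re.findall(r'flags\.go:64\] FLAG: (--[\w-]+)="([^"]*)"', text))

    print(flags.get("--config"))    # /etc/kubernetes/kubelet.conf
    print(flags.get("--node-ip"))   # 192.168.126.11
    print(len(flags), "flags recorded")

Values come back as the raw quoted strings, so list-valued flags such as --cluster-dns="[]" still need their own parsing.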
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774873 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774883 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774891 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774899 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774913 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774921 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774931 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774939 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774947 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774955 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774963 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774973 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774984 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.774994 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775003 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775011 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775020 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775029 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775038 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775046 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775054 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775062 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775070 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775078 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775085 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775093 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775101 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775109 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775116 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775124 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775132 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775140 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775148 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775156 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775164 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775171 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775196 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775204 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775213 4792 feature_gate.go:330] unrecognized feature 
gate: GatewayAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775220 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775228 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775239 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775246 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775254 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775262 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775270 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775278 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.775285 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.775307 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.789402 4792 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.789452 4792 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789655 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789675 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789685 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789694 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789702 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789710 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789718 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789726 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789734 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789741 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789749 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS 
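Two things are worth separating in this wall of feature_gate records: the feature_gate.go:386 summary just above, which is the effective gate map the kubelet actually runs with, and the feature_gate.go:330 warnings, whose names (GatewayAPI, NewOLM, InsightsConfig and so on) are evidently OpenShift cluster-level gates that the upstream kubelet gate table does not know, so each parse pass re-emits the same warnings. A sketch that recovers both, again assuming the capture sits in kubelet.log:

    import re
    from collections import Counter

    text = open("kubelet.log", encoding="utf-8").read()

    # Effective gates from the first feature_gate.go:386 summary record.
    # Note the values stay strings ("true"/"false"), not booleans.
    m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", text)
    gates = dict(pair.split(":") for pair in m.group(1).split()) if m else {}

    # Count how often each unrecognized gate name was warned about.
    unknown = Counter(re.findall(r"unrecognized feature gate: (\w+)", text))

    print(gates.get("ValidatingAdmissionPolicy"))   # true
    print(len(unknown), "distinct unrecognized gates")

Per the summaries in this capture, the map resolves identically on every pass (KMSv1, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders and ValidatingAdmissionPolicy forced on, the rest off), so the repetition carries no new information.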
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789757 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789768 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789778 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789787 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789795 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789804 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789811 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789820 4792 feature_gate.go:330] unrecognized feature gate: Example Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789831 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789842 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789852 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789860 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789869 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789877 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789885 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789893 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789901 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789911 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789921 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789930 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789938 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789946 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789955 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789975 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789983 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789991 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.789999 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790006 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790014 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790022 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790032 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790041 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790050 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790060 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790069 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790077 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790085 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790094 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790121 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790129 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790136 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790145 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790153 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790161 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790168 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790175 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790183 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790191 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790198 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790206 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790215 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790223 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790230 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790238 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790246 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790253 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790261 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790268 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790276 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790296 4792 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.790309 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790594 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790616 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790625 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790659 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790667 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790675 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790683 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790691 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790699 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790708 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790716 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790724 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790731 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790740 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790747 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790755 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790763 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790770 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790778 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790786 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790794 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:37:47 crc kubenswrapper[4792]: 
W0216 21:37:47.790801 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790809 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790817 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790825 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790833 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790840 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790848 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790855 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790864 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790871 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790881 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790891 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790899 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790909 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790916 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790924 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790931 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790939 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790947 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790954 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790962 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790970 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790977 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790985 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.790994 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791002 4792 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791009 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791017 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791024 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791032 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791040 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791048 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791055 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791062 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791071 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791078 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791086 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791094 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791101 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791109 4792 feature_gate.go:330] unrecognized feature gate: Example Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791117 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791128 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791138 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791148 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791158 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791168 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791176 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791184 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791194 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.791204 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.791217 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.791465 4792 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.797719 4792 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.798124 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.800575 4792 server.go:997] "Starting client certificate rotation"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.800687 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.800955 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 02:29:39.000018087 +0000 UTC
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.801105 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.828458 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.833575 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.834480 4792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.849450 4792 log.go:25] "Validated CRI v1 runtime API"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.886435 4792 log.go:25] "Validated CRI v1 image API"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.888891 4792 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.894071 4792 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-21-33-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.894111 4792 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.912885 4792 manager.go:217] Machine: {Timestamp:2026-02-16 21:37:47.909813069 +0000 UTC m=+0.563091990 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7cf4d510-eeff-4323-b01d-9568b7e39914 BootID:1f4590c4-5339-4c82-a413-234d08dabd4a Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:19:8a:4a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:19:8a:4a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a1:e6:05 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f0:e6:d9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ad:87:a2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:88:d2:fd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:f7:f3:d3:ef:c1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:62:1f:bb:d6:60:35 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.913136 4792 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
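
Note: a few derived figures from the Machine entry above, for orientation. MemoryCapacity:33654116352 bytes is 33654116352 / 2^30 ≈ 31.3 GiB; the single vda disk at 214748364800 bytes is exactly 200 GiB; and NumCores:12 together with NumSockets:12 and NumPhysicalCores:1 means the hypervisor presents each vCPU as its own single-core, single-thread socket, which is why the Topology list shows twelve sockets with one thread apiece.
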
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.913278 4792 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.915362 4792 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.915570 4792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.915606 4792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.915951 4792 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.915964 4792 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.916525 4792 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.916561 4792 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.916779 4792 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.916879 4792 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.922713 4792 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.922743 4792 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
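
Note: the nodeConfig dump above carries the node's resource-protection settings, notably SystemReserved and the HardEvictionThresholds (evict when memory.available drops below 100Mi, when nodefs.available drops below 10%, and so on). A small Go sketch of pulling those fields out of such a dump follows; the struct shapes mirror the JSON keys as printed in the log, not kubelet's internal types, and the fragment is copied from the entry above.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Minimal shapes for the parts of nodeConfig we inspect here.
    type threshold struct {
        Signal   string
        Operator string
        Value    struct {
            Quantity   *string // e.g. "100Mi", or null when Percentage is used
            Percentage float64
        }
    }

    type nodeConfig struct {
        SystemReserved         map[string]string
        HardEvictionThresholds []threshold
    }

    // Fragment copied from the container_manager_linux.go:272 entry above.
    const fragment = `{"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

    func main() {
        var cfg nodeConfig
        if err := json.Unmarshal([]byte(fragment), &cfg); err != nil {
            panic(err)
        }
        fmt.Println("system reserved:", cfg.SystemReserved)
        for _, t := range cfg.HardEvictionThresholds {
            q := "n/a"
            if t.Value.Quantity != nil {
                q = *t.Value.Quantity
            }
            fmt.Printf("%s %s quantity=%s percentage=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
        }
    }
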
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.922799 4792 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.922995 4792 kubelet.go:324] "Adding apiserver pod source"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.923030 4792 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.928470 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.929420 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.929903 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.930271 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.932394 4792 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.933579 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
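
Note: the repeated "dial tcp 38.102.83.200:6443: connect: connection refused" failures above simply mean the kube-apiserver behind api-int.crc.testing:6443 is not up yet at this point in the boot; the client-go reflectors log the error and retry with backoff, which is normal on a single-node cluster where the kubelet starts before the static-pod apiserver. A quick reachability probe in Go, using the host and port from the log (a diagnostic sketch, not part of kubelet), could look like this:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint taken from the reflector errors above.
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
        if err != nil {
            fmt.Println("still unreachable:", err) // cf. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("API endpoint is accepting TCP connections")
    }
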
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.935184 4792 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936677 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936703 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936712 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936720 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936735 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936743 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936751 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936764 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936774 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936794 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936806 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.936815 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.937808 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.938281 4792 server.go:1280] "Started kubelet"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.939295 4792 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.939311 4792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 21:37:47 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.940314 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.941470 4792 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.943086 4792 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.943890 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.943951 4792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.944063 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:53:31.529014447 +0000 UTC
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.944379 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.944437 4792 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.944445 4792 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.944523 4792 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 21:37:47 crc kubenswrapper[4792]: W0216 21:37:47.945789 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.945966 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.946069 4792 factory.go:55] Registering systemd factory
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.948212 4792 factory.go:221] Registration of the systemd container factory successfully
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.949046 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.949157 4792 factory.go:153] Registering CRI-O factory
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.949185 4792 factory.go:221] Registration of the crio container factory successfully
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.949301 4792 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.949340 4792 factory.go:103] Registering Raw factory
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.949369 4792 manager.go:1196] Started watching for new ooms in manager
Feb 16 21:37:47 crc kubenswrapper[4792]: E0216 21:37:47.949292 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894d7d614f88107 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:37:47.938251015 +0000 UTC m=+0.591529906,LastTimestamp:2026-02-16 21:37:47.938251015 +0000 UTC m=+0.591529906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.950620 4792 manager.go:319] Starting recovery of all containers
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963078 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963223 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963253 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963285 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963313 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963343 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963373 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963419 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963454 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963482 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963509 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963542 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963570 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963615 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963678 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963706 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963736 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963764 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963789 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963827 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963852 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.963937 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964006 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964035 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964060 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964086 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964119 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964156 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964190 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964285 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964316 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964344 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964370 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964395 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964421 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964448 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964473 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964501 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964531 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964556 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964583 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964617 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964680 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964712 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964739 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964768 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964792 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964817 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964852 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964878 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964904 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964930 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964966 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.964996 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.965027 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.965056 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.965085 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.965115 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.969690 4792 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.969895 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.969949 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.969973 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.969994 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970016 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970073 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970092 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970131 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970167 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970203 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970241 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970307 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970363 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970386 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970410 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970432 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970455 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970480 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970501 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970523 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970549 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970577 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970648 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970720 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970754 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970784 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970887 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970937 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.970979 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971017 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971056 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971085 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971110 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971131 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971198 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971259 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971296 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971349 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971370 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971392 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971414 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971473 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971525 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971670 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971705 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971750 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971818 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971862 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971919 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971958 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.971983 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973185 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973232 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973266 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973296 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973329 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973356 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973384 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973411 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973440 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973490 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973518 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973546 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973575 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973673 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973704 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973732 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973757 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973785 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973814 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973841 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973871 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973901 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973933 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973960 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.973987 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974015 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974047 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974079 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974112 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974145 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f"
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974173 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974202 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974229 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974291 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974325 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974356 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974422 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974444 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974464 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974484 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974505 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974529 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974551 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974573 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974604 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974656 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974685 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974712 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974733 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974754 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974775 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974796 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974816 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974840 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974864 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974883 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974907 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974927 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974947 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974968 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.974992 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975013 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975033 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975056 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975076 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975097 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975117 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975138 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975159 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975180 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975200 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975220 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975241 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975260 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975280 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975301 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975321 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975343 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975363 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975383 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975405 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975426 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975447 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975468 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975491 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975513 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975534 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975555 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975575 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975603 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975653 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975683 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975710 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975740 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975769 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975791 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975812 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975833 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975853 4792 reconstruct.go:97] "Volume reconstruction finished" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.975869 4792 reconciler.go:26] "Reconciler: start to sync state" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.980732 4792 manager.go:324] Recovery completed Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.990403 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.992761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.992815 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.992828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.993650 4792 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.993673 4792 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 21:37:47 crc kubenswrapper[4792]: I0216 21:37:47.993694 4792 state_mem.go:36] "Initialized new in-memory state store" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.005662 4792 policy_none.go:49] "None policy: Start" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.008302 4792 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.008343 4792 state_mem.go:35] "Initializing new in-memory state store" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.021792 4792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.024769 4792 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.024856 4792 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.024901 4792 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.024989 4792 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 21:37:48 crc kubenswrapper[4792]: W0216 21:37:48.025354 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.025415 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.044458 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.070860 4792 manager.go:334] "Starting Device Plugin manager" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.071115 4792 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.071132 4792 server.go:79] "Starting device plugin registration server" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.071592 4792 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.071618 4792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.072075 4792 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.072208 4792 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.072216 4792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.079127 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.126160 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.126264 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.127682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.127743 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.127752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.127887 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128187 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128264 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128642 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.128830 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.129076 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.129151 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.129377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.129405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.129418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.130487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.130708 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.130734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.130925 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.131435 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.131467 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132840 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132866 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132850 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.132905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.133429 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.134102 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.134204 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.134996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135251 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135302 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135419 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.135990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.136017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.136029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.149321 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.172318 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.173551 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.173720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.173819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.173906 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.174723 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.177863 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.177958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178099 
4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178163 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178463 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178721 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178767 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178805 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.178826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.279936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280544 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280674 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280828 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280924 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280973 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281028 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280988 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281096 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281052 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.280997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281748 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281846 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281982 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.282115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281709 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.281375 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.282318 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.282564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.282702 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.282776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.375544 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.377407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.377475 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.377500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.377545 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.378287 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.464671 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.486178 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.502198 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: W0216 21:37:48.509948 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e9fe637346a907eda7c551fa1079ca73cf551b0e20a172bbd5eec67baf9622f1 WatchSource:0}: Error finding container e9fe637346a907eda7c551fa1079ca73cf551b0e20a172bbd5eec67baf9622f1: Status 404 returned error can't find the container with id e9fe637346a907eda7c551fa1079ca73cf551b0e20a172bbd5eec67baf9622f1 Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.516924 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.525965 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:48 crc kubenswrapper[4792]: W0216 21:37:48.526781 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-13097943fce19798866e4a0ee7d949c6de25a61c458d730fe937192b040ea0f4 WatchSource:0}: Error finding container 13097943fce19798866e4a0ee7d949c6de25a61c458d730fe937192b040ea0f4: Status 404 returned error can't find the container with id 13097943fce19798866e4a0ee7d949c6de25a61c458d730fe937192b040ea0f4 Feb 16 21:37:48 crc kubenswrapper[4792]: W0216 21:37:48.527496 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-87b4dc8df39e2e03ee2b795b68b66e0c80849c72b0ca9ce4a125b682a12e065e WatchSource:0}: Error finding container 87b4dc8df39e2e03ee2b795b68b66e0c80849c72b0ca9ce4a125b682a12e065e: Status 404 returned error can't find the container with id 87b4dc8df39e2e03ee2b795b68b66e0c80849c72b0ca9ce4a125b682a12e065e Feb 16 21:37:48 crc kubenswrapper[4792]: W0216 21:37:48.536757 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2b5dfd06fbb0ad0b623633c6751e29365d7ccf77fce29530458262ad4a7618f8 WatchSource:0}: Error finding container 2b5dfd06fbb0ad0b623633c6751e29365d7ccf77fce29530458262ad4a7618f8: Status 404 returned error can't find the container with id 2b5dfd06fbb0ad0b623633c6751e29365d7ccf77fce29530458262ad4a7618f8 Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.550907 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.778562 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.779795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.779826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.779837 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.779865 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: E0216 21:37:48.780407 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.941258 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:48 crc kubenswrapper[4792]: I0216 21:37:48.944412 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:17:38.574590981 +0000 UTC Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.031812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65c1bf6f0f56f382cd528aec7f3e5b3c09934ec0e20380a7bfaf086dff67612c"} Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.032682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b5dfd06fbb0ad0b623633c6751e29365d7ccf77fce29530458262ad4a7618f8"} Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.033989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87b4dc8df39e2e03ee2b795b68b66e0c80849c72b0ca9ce4a125b682a12e065e"} Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.034979 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"13097943fce19798866e4a0ee7d949c6de25a61c458d730fe937192b040ea0f4"} Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.035815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9fe637346a907eda7c551fa1079ca73cf551b0e20a172bbd5eec67baf9622f1"} Feb 16 21:37:49 crc kubenswrapper[4792]: W0216 21:37:49.284684 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.284805 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:49 crc kubenswrapper[4792]: W0216 21:37:49.312253 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.312380 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:49 crc kubenswrapper[4792]: W0216 21:37:49.319678 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.319775 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.352831 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Feb 16 21:37:49 crc kubenswrapper[4792]: W0216 21:37:49.367247 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.367376 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.581516 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.583335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.583381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.583399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.583433 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.584178 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.847154 4792 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 21:37:49 crc kubenswrapper[4792]: E0216 21:37:49.848155 4792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.941674 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:49 crc kubenswrapper[4792]: I0216 21:37:49.944722 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:35:20.148063254 +0000 UTC Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.039949 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0fb1d2595dbdef65a582889d66375ee3123fac00deb54fe06c94be173bf7ea6b" exitCode=0 Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.040022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0fb1d2595dbdef65a582889d66375ee3123fac00deb54fe06c94be173bf7ea6b"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.040520 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.041845 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca" exitCode=0 Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.041910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.041985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.042035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.042052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.042455 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.043800 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2514fbab3e3e8134bb225f703f902cd69818c335bc1563ce5db1a3506b4b6765" exitCode=0 Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.043890 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2514fbab3e3e8134bb225f703f902cd69818c335bc1563ce5db1a3506b4b6765"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.043908 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.045280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.047067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.047121 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.047141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.047157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.047170 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.048422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.048452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.048463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.050223 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8" exitCode=0 Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.050302 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8"} Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.050459 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.052271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.052354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.052381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.056688 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.058691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.058742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.058754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.941741 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:50 crc kubenswrapper[4792]: I0216 21:37:50.945281 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:58:55.681968746 +0000 UTC Feb 16 21:37:50 crc kubenswrapper[4792]: E0216 21:37:50.954516 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.055173 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2339925a0bd14050bedd2f7bed99705b97217e702a55d0449b0f789b44fdab31"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.055269 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.056743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.056788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.056801 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.058699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.058731 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.058743 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.058753 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.058761 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.060222 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="baa39ee8868e8e8e331dc51134f93be4a5e5e8da53f91d55fac40e7bcc005e8a" exitCode=0 Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.060264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"baa39ee8868e8e8e331dc51134f93be4a5e5e8da53f91d55fac40e7bcc005e8a"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.060369 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.061234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.061257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.061266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.064495 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.064931 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065212 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df"} Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.065610 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.066061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.066079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.066086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:51 crc kubenswrapper[4792]: W0216 21:37:51.155200 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Feb 16 21:37:51 crc kubenswrapper[4792]: E0216 21:37:51.155272 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.184815 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.185884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.185921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.185931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.185954 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:51 crc kubenswrapper[4792]: E0216 21:37:51.186347 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Feb 16 21:37:51 crc kubenswrapper[4792]: I0216 21:37:51.945384 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:00:49.265901501 +0000 UTC Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073742 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4b732738052a7489309624f117563b400efea4fccbed73cc590532d54a7f8df0" exitCode=0 Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073811 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4b732738052a7489309624f117563b400efea4fccbed73cc590532d54a7f8df0"} Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073885 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073905 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073921 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.073955 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.074021 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.075912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076440 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076515 4792 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.076533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:52 crc kubenswrapper[4792]: I0216 21:37:52.946936 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:37:57.257132568 +0000 UTC Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.078692 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.078600 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88c6dde40b3c535993a2058ce48081cab7e9174dd411f0c7182404224518da95"} Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079078 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079103 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cd5d94bb72a99095a80465b3eb73c088ccdfed1ee388af0e844fb87153e2e55"} Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079120 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"681bcd07cbde8fd7fe97e21acd507267258c43753d68be570d3eb3f6793d8475"} Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b03155cbc0f9a67f42e3a420d4314c90242d64de20d345afaeb59c7f9456ca64"} Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.079388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:53 crc kubenswrapper[4792]: I0216 21:37:53.947794 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:01:40.680161628 +0000 UTC Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.088478 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da0fb1f07005bffb8bc7e19057bd0a85e5175716a71fbefd75e67e57da13c9b2"} Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.088549 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.088550 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.089729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.230255 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.363745 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.387248 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.389354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.389401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.389418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.389448 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.877626 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.877783 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.878829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.878950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.879008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:54 crc kubenswrapper[4792]: I0216 21:37:54.948487 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:19:15.960212063 +0000 UTC Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.090944 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.092159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.092382 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.092564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.406240 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.406810 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.408945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.409147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.409339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:55 crc kubenswrapper[4792]: I0216 21:37:55.948935 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:11:25.133496929 +0000 UTC Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.094086 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.095245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.095301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.095322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.526741 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.526988 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.528485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.528528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.528552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:37:56 crc kubenswrapper[4792]: I0216 21:37:56.950309 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:32:25.366345622 +0000 UTC Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.552448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.552763 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.554359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.554406 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.554420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.559530 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.795592 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 21:37:57 crc kubenswrapper[4792]: I0216 21:37:57.951304 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:17:33.97280736 +0000 UTC
Feb 16 21:37:58 crc kubenswrapper[4792]: E0216 21:37:58.079400 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 21:37:58 crc kubenswrapper[4792]: I0216 21:37:58.098368 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:37:58 crc kubenswrapper[4792]: I0216 21:37:58.099400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:37:58 crc kubenswrapper[4792]: I0216 21:37:58.099438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:37:58 crc kubenswrapper[4792]: I0216 21:37:58.099451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:37:58 crc kubenswrapper[4792]: I0216 21:37:58.952280 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:43:37.159317134 +0000 UTC
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.101781 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.103798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.103911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.103941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.109073 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.276972 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.930408 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 21:37:59 crc kubenswrapper[4792]: I0216 21:37:59.952910 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:14:06.484965963 +0000 UTC
Feb 16 21:38:00 crc kubenswrapper[4792]: I0216 21:38:00.104445 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:00 crc kubenswrapper[4792]: I0216 21:38:00.105854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:00 crc kubenswrapper[4792]: I0216 21:38:00.105911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:00 crc kubenswrapper[4792]: I0216 21:38:00.105935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:00 crc kubenswrapper[4792]: I0216 21:38:00.953723 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:43:46.364495741 +0000 UTC
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.107091 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.107947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.107989 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.108001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:01 crc kubenswrapper[4792]: W0216 21:38:01.584303 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.584487 4792 trace.go:236] Trace[810019636]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 21:37:51.582) (total time: 10001ms):
Feb 16 21:38:01 crc kubenswrapper[4792]: Trace[810019636]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:38:01.584)
Feb 16 21:38:01 crc kubenswrapper[4792]: Trace[810019636]: [10.001516555s] [10.001516555s] END
Feb 16 21:38:01 crc kubenswrapper[4792]: E0216 21:38:01.584532 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.942766 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 16 21:38:01 crc kubenswrapper[4792]: I0216 21:38:01.954514 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:45:47.006945162 +0000 UTC
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.110988 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.117006 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728" exitCode=255
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.117063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728"}
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.117263 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.118415 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.118459 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.118471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.118982 4792 scope.go:117] "RemoveContainer" containerID="1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728"
Feb 16 21:38:02 crc kubenswrapper[4792]: W0216 21:38:02.504199 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.504314 4792 trace.go:236] Trace[397197862]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 21:37:52.503) (total time: 10001ms):
Feb 16 21:38:02 crc kubenswrapper[4792]: Trace[397197862]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:38:02.504)
Feb 16 21:38:02 crc kubenswrapper[4792]: Trace[397197862]: [10.001249378s] [10.001249378s] END
Feb 16 21:38:02 crc kubenswrapper[4792]: E0216 21:38:02.504342 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 16 21:38:02 crc kubenswrapper[4792]: W0216 21:38:02.520408 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.520509 4792 trace.go:236] Trace[895726847]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 21:37:52.518) (total time: 10001ms):
Feb 16 21:38:02 crc kubenswrapper[4792]: Trace[895726847]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:38:02.520)
Feb 16 21:38:02 crc kubenswrapper[4792]: Trace[895726847]: [10.001701941s] [10.001701941s] END
Feb 16 21:38:02 crc kubenswrapper[4792]: E0216 21:38:02.520534 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.666336 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.666416 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.674496 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.674552 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.931050 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.931112 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:38:02 crc kubenswrapper[4792]: I0216 21:38:02.954931 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:33:41.158746796 +0000 UTC
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.121003 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.123110 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7"}
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.123241 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.123933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.123968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.123978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.930653 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.930888 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.931943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.932046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.932132 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.955538 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:29:50.14699006 +0000 UTC
Feb 16 21:38:03 crc kubenswrapper[4792]: I0216 21:38:03.961953 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.125948 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.127350 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.127482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.127576 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.146468 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 16 21:38:04 crc kubenswrapper[4792]: I0216 21:38:04.956526 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:47:18.393608623 +0000 UTC
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.128880 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.130335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.130403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.130423 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.411722 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.411854 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.411964 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.412757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.412780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.412788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.419030 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:38:05 crc kubenswrapper[4792]: I0216 21:38:05.956754 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:36:06.865936601 +0000 UTC
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.133379 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.134542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.134764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.134879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.957268 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:25:13.656675185 +0000 UTC
Feb 16 21:38:06 crc kubenswrapper[4792]: I0216 21:38:06.969036 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.136113 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.137078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.137127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.137147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.209415 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.532472 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 21:38:07 crc kubenswrapper[4792]: E0216 21:38:07.646974 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.649586 4792 trace.go:236] Trace[1760826866]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 21:37:57.315) (total time: 10334ms): Feb 16 21:38:07 crc kubenswrapper[4792]: Trace[1760826866]: ---"Objects listed" error: 10334ms (21:38:07.649) Feb 16 21:38:07 crc kubenswrapper[4792]: Trace[1760826866]: [10.334417449s] [10.334417449s] END Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.649671 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.651264 4792 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 21:38:07 crc kubenswrapper[4792]: E0216 21:38:07.653190 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.673024 4792 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.944798 4792 apiserver.go:52] "Watching apiserver" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.946498 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.946753 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947076 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:07 crc kubenswrapper[4792]: E0216 21:38:07.947244 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947109 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947141 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:07 crc kubenswrapper[4792]: E0216 21:38:07.947633 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947251 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:07 crc kubenswrapper[4792]: E0216 21:38:07.947681 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947265 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.947081 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.948992 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949107 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949105 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949382 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949394 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949427 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.949966 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.950897 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.951163 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.957978 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:25:59.088187825 +0000 UTC Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.983841 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:07 crc kubenswrapper[4792]: I0216 21:38:07.994235 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.001954 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.010584 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.019023 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.027862 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.038210 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.045137 4792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054005 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054107 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054146 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054235 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054258 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054278 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054341 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054369 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054414 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054436 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054440 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054462 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054493 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054528 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054553 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054547 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054565 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054576 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054641 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054677 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054704 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054717 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054729 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054752 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054807 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054857 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054878 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054933 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054952 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054968 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.054984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055000 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055015 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055031 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055049 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055082 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055081 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055173 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055200 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055223 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055247 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055270 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055292 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055287 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055359 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055362 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055398 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055426 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055455 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055478 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055525 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055547 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055569 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055588 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055629 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055684 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055779 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055802 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055847 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055894 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055936 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055956 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056060 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056083 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056103 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056123 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056173 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056195 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" 
(UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056219 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056243 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056267 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056293 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056317 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056337 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056358 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056385 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056404 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056422 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056440 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056460 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056531 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056552 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056574 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056647 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056666 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056733 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056757 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056783 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056805 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056828 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056850 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056872 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056895 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 
21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056915 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056941 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056971 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056996 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057021 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057046 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057070 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057093 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057115 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057156 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057177 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057199 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057246 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057294 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057318 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057340 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057389 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057412 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057436 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057462 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057511 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057535 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057582 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057661 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057759 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057781 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057806 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057836 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057884 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057910 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057957 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057980 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058002 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058026 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058075 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058120 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058142 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058165 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058210 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058233 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055426 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055521 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055587 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.055712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056186 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056391 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056861 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.056872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057241 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057420 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057626 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057842 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057903 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.057913 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058486 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058485 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058556 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058707 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058703 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058782 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058864 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059018 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058972 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059158 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059272 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059398 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059429 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.059484 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:38:08.559463897 +0000 UTC m=+21.212742868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059514 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.059967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060081 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060234 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060562 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060590 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060632 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060847 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.060921 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061156 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061342 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061463 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061552 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061664 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.061936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.058256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063295 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063345 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063382 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063397 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063415 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063432 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063448 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063480 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063496 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063511 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063526 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063574 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063633 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063657 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063806 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: 
I0216 21:38:08.063822 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063839 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063856 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063873 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063964 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.063985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 
21:38:08.064020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064055 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064087 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064228 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064294 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064310 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064323 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064335 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064346 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064358 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064370 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064414 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064638 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064679 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064694 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064706 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064745 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064757 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064768 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064779 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064790 4792 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064803 4792 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064815 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064827 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064839 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064850 4792 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064863 4792 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064875 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064887 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064899 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064911 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064938 4792 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064952 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064965 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064977 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064988 4792 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065000 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065508 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065527 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065539 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065551 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065562 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066791 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066816 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067123 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067145 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067163 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067184 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067196 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067206 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067215 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067225 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067236 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067250 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067262 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067271 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067280 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067290 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067299 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067309 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067319 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067328 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067336 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067345 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067355 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067364 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067384 4792 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067395 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067405 4792 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067416 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067426 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067435 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067444 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064860 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064884 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.064965 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065123 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065152 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065382 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065388 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065764 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065782 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065811 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066022 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066013 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066387 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066682 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.066730 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.067122 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.065847 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.068700 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.068979 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069014 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069324 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069449 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069503 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069525 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069544 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069687 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069719 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.069984 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070136 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070224 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070517 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070232 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070545 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070838 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070843 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070865 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070844 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.070895 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071273 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071234 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071355 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071443 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071494 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071537 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071558 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071830 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071984 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.072082 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.071965 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.072506 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.072571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.072829 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.073016 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.073122 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:08.573106266 +0000 UTC m=+21.226385157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.073483 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.073731 4792 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.074076 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.076247 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.076761 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.076842 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:38:08.576824369 +0000 UTC m=+21.230103260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.076971 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.077126 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.077305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.077853 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.078348 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.078634 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.078935 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.078972 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.079107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.079418 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.079466 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.079867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080025 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080149 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080413 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080268 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080558 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.080893 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.087138 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.095254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.095609 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.097902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.105839 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.108725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.109464 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.110248 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.110270 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.110291 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.112214 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:08.612186052 +0000 UTC m=+21.265464943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.113039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.113148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.113456 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.113464 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.113533 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.113551 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.113628 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:08.613591971 +0000 UTC m=+21.266870862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.113751 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.113833 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.115517 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.116085 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.116752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117114 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117179 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117202 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117266 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117304 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117569 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.117765 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.118263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.118337 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.118931 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.119347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.119434 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.119487 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.119788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.119991 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.120506 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.120723 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.120998 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.121206 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.121592 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.121721 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.121795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.121885 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.122510 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.122923 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.123070 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.123124 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.124406 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.124565 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.124884 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.124985 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.125553 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.129090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.134124 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.135755 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.141624 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.143198 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.150039 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.151679 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.152053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168463 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168536 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168546 4792 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168557 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168565 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168574 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168582 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168590 4792 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168622 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168631 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168640 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168651 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168660 4792 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168669 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168677 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168685 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168693 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168702 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168711 4792 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168725 4792 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168735 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168743 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168751 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168759 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168767 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168776 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168784 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168792 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168800 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168809 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168820 4792 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168828 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168836 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168844 4792 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168852 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168860 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168868 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168718 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168875 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168876 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168939 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168956 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168971 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168986 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.168999 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169014 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169029 4792 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169043 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169057 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169073 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169088 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169103 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169118 4792 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169133 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169149 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169164 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169180 4792 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169194 4792 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169208 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169223 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169234 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169246 4792 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169256 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169270 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169280 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169290 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169301 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169310 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169319 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169327 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169335 4792 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169343 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169351 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169359 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169368 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169376 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169384 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169393 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169401 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169410 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169417 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169426 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169434 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169442 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169450 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169458 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169466 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169475 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169485 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169496 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169507 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169518 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169529 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169539 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169550 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169565 4792 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169576 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169587 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169618 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169629 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169640 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169650 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169661 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169672 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169680 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169689 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169697 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169705 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169713 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169722 4792 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169730 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169738 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169747 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169755 4792 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169763 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169772 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169780 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169788 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169795 4792 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169804 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169812 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169820 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169829 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169838 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169845 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169853 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.169862 4792 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.265738 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.271191 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.276447 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 21:38:08 crc kubenswrapper[4792]: W0216 21:38:08.281363 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-71caddd0eb3fcfa873f19cc8f9a277b0cfc5416083c3823329b0d80c064b9e59 WatchSource:0}: Error finding container 71caddd0eb3fcfa873f19cc8f9a277b0cfc5416083c3823329b0d80c064b9e59: Status 404 returned error can't find the container with id 71caddd0eb3fcfa873f19cc8f9a277b0cfc5416083c3823329b0d80c064b9e59 Feb 16 21:38:08 crc kubenswrapper[4792]: W0216 21:38:08.293817 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-aa34384d7cc0aed2e9c388f2f5b7cf7d96ea9a9969b7b81453186388401f4f82 WatchSource:0}: Error finding container aa34384d7cc0aed2e9c388f2f5b7cf7d96ea9a9969b7b81453186388401f4f82: Status 404 returned error can't find the container with id aa34384d7cc0aed2e9c388f2f5b7cf7d96ea9a9969b7b81453186388401f4f82 Feb 16 21:38:08 crc kubenswrapper[4792]: W0216 21:38:08.300311 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fc2cc17fbf421517bda735f7b0574b1e977a7b4a0b105f98ff1684fbf286c4a3 WatchSource:0}: Error finding container fc2cc17fbf421517bda735f7b0574b1e977a7b4a0b105f98ff1684fbf286c4a3: Status 404 returned error can't find the container with id fc2cc17fbf421517bda735f7b0574b1e977a7b4a0b105f98ff1684fbf286c4a3 Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.573326 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.573484 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:38:09.573441903 +0000 UTC m=+22.226720784 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.573725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.573882 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.573953 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:09.573941607 +0000 UTC m=+22.227220498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.674283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.674557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.674691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.674504 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.674864 4792 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.674930 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.674956 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.674891 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.675080 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.675096 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.675053 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:09.674893263 +0000 UTC m=+22.328172154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.675329 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:09.675316104 +0000 UTC m=+22.328594995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: E0216 21:38:08.675422 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:09.675411027 +0000 UTC m=+22.328689928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:08 crc kubenswrapper[4792]: I0216 21:38:08.958734 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:01:03.197205622 +0000 UTC Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.025925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.026110 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.145396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.145446 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.145456 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fc2cc17fbf421517bda735f7b0574b1e977a7b4a0b105f98ff1684fbf286c4a3"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.146641 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.146669 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aa34384d7cc0aed2e9c388f2f5b7cf7d96ea9a9969b7b81453186388401f4f82"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.147506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"71caddd0eb3fcfa873f19cc8f9a277b0cfc5416083c3823329b0d80c064b9e59"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.148900 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.149794 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.151929 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7" exitCode=255 Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.151964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7"} Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.152019 4792 scope.go:117] "RemoveContainer" containerID="1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.162157 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.162534 4792 scope.go:117] "RemoveContainer" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7" Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.162785 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.164504 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.174849 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.189794 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.228922 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:0
8Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.244048 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.257987 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.270684 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.283289 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.297916 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.316825 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:01Z\\\",\\\"message\\\":\\\"W0216 21:37:51.108754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 
21:37:51.109040 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771277871 cert, and key in /tmp/serving-cert-1321656732/serving-signer.crt, /tmp/serving-cert-1321656732/serving-signer.key\\\\nI0216 21:37:51.421005 1 observer_polling.go:159] Starting file observer\\\\nW0216 21:37:51.423938 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 21:37:51.424119 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:37:51.425963 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1321656732/tls.crt::/tmp/serving-cert-1321656732/tls.key\\\\\\\"\\\\nF0216 21:38:01.682948 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 
21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.331216 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.345766 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.357381 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.581945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.582076 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:38:11.582047527 +0000 UTC m=+24.235326458 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.582111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.582217 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.582265 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:11.582254803 +0000 UTC m=+24.235533694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.682788 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.682832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.682851 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.682942 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.682956 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.682965 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683001 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683010 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:11.682997303 +0000 UTC m=+24.336276194 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683084 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:11.683065625 +0000 UTC m=+24.336344556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683184 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683203 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683220 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:09 crc kubenswrapper[4792]: E0216 21:38:09.683284 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:11.68326186 +0000 UTC m=+24.336540781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.933688 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.938375 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.942026 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.949729 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.959581 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:57:06.435074175 +0000 UTC Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.964019 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.976355 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.988482 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:01Z\\\",\\\"message\\\":\\\"W0216 21:37:51.108754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 
21:37:51.109040 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771277871 cert, and key in /tmp/serving-cert-1321656732/serving-signer.crt, /tmp/serving-cert-1321656732/serving-signer.key\\\\nI0216 21:37:51.421005 1 observer_polling.go:159] Starting file observer\\\\nW0216 21:37:51.423938 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 21:37:51.424119 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:37:51.425963 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1321656732/tls.crt::/tmp/serving-cert-1321656732/tls.key\\\\\\\"\\\\nF0216 21:38:01.682948 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 
21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:09 crc kubenswrapper[4792]: I0216 21:38:09.999676 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:09Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.012129 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.024179 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.025235 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.025315 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:10 crc kubenswrapper[4792]: E0216 21:38:10.025380 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:10 crc kubenswrapper[4792]: E0216 21:38:10.025448 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.029498 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.030333 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.031096 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.036389 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.037115 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.037836 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.038450 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.039086 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.039754 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 
21:38:10.040327 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.041890 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.042931 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.043702 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.044447 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.045221 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.045993 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.046824 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.047451 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.048115 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1154a1f9f2c42730125bdfee77d0110af10ce59c5dbbcca2dcae48bd56520728\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:01Z\\\",\\\"message\\\":\\\"W0216 21:37:51.108754 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 
21:37:51.109040 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771277871 cert, and key in /tmp/serving-cert-1321656732/serving-signer.crt, /tmp/serving-cert-1321656732/serving-signer.key\\\\nI0216 21:37:51.421005 1 observer_polling.go:159] Starting file observer\\\\nW0216 21:37:51.423938 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 21:37:51.424119 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:37:51.425963 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1321656732/tls.crt::/tmp/serving-cert-1321656732/tls.key\\\\\\\"\\\\nF0216 21:38:01.682948 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 
21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.048361 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.049347 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.050180 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.051162 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.052466 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.053682 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.054691 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.055396 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.056170 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.056690 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.057259 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.057756 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.058227 4792 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.058325 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.059563 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.062027 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.062693 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.064591 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.065891 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.066562 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.067982 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.068041 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.068837 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.069956 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.070704 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.076128 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.077178 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.077944 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.078922 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.079545 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.080808 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.081390 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.082650 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.083223 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.083903 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.084248 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.085081 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.085718 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.099263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.110869 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.123733 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.135731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.149410 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.155367 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.157567 4792 scope.go:117] "RemoveContainer" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7" Feb 16 21:38:10 crc kubenswrapper[4792]: E0216 21:38:10.157764 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.167388 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.178191 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.189221 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.209661 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.233845 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.244908 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.255530 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.272989 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:10Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:10 crc kubenswrapper[4792]: I0216 21:38:10.960231 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:58:34.956088275 +0000 UTC Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.026046 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.026208 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.161282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed"} Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.178585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.199321 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.216251 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf56
17b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.233968 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.250632 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.269752 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.286258 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.309204 
4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:11Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.599705 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.599996 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:38:15.599960136 +0000 UTC m=+28.253239067 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.600451 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.600703 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.600798 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:15.600780849 +0000 UTC m=+28.254059770 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.701718 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.701792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.701832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701797 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701870 4792 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701888 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701899 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701945 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:15.70193293 +0000 UTC m=+28.355211821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.701958 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:15.701953191 +0000 UTC m=+28.355232082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.702004 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.702045 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.702069 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:11 crc kubenswrapper[4792]: E0216 21:38:11.702145 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:15.702127156 +0000 UTC m=+28.355406077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:11 crc kubenswrapper[4792]: I0216 21:38:11.960618 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:56:03.613790649 +0000 UTC Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.025503 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.025503 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.025703 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.025856 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.068200 4792 csr.go:261] certificate signing request csr-2wppg is approved, waiting to be issued Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.097160 4792 csr.go:257] certificate signing request csr-2wppg is issued Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.876033 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-szmc4"] Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.876350 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2vlsf"] Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.876502 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-554x7"] Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.876550 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.876647 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.877449 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.878419 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mp8ql"] Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.878705 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mp8ql" Feb 16 21:38:12 crc kubenswrapper[4792]: W0216 21:38:12.878875 4792 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.878926 4792 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 21:38:12 crc kubenswrapper[4792]: W0216 21:38:12.878877 4792 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.878973 4792 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.879443 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 21:38:12 crc kubenswrapper[4792]: W0216 21:38:12.880951 4792 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.880992 4792 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881015 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881046 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 21:38:12 crc 
kubenswrapper[4792]: W0216 21:38:12.881081 4792 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 21:38:12 crc kubenswrapper[4792]: E0216 21:38:12.881093 4792 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881057 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881446 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881507 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881732 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881760 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881788 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.881460 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.882063 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.898844 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.916840 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.934791 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.950647 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.961744 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:34:27.35237761 +0000 UTC Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.966665 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.983159 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:12 crc kubenswrapper[4792]: I0216 21:38:12.992943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:12Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.003150 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012518 
4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cni-binary-copy\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012561 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-bin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f759c59-befa-4d12-ab4b-c4e579fba2bd-rootfs\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012642 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f759c59-befa-4d12-ab4b-c4e579fba2bd-mcd-auth-proxy-config\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-os-release\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012714 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-system-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-multus\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-hostroot\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012827 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-conf-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 
crc kubenswrapper[4792]: I0216 21:38:13.012861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r4n9\" (UniqueName: \"kubernetes.io/projected/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-kube-api-access-9r4n9\") pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012881 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhwqj\" (UniqueName: \"kubernetes.io/projected/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-kube-api-access-xhwqj\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.012998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013087 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013160 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cnibin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcrm\" (UniqueName: \"kubernetes.io/projected/5f759c59-befa-4d12-ab4b-c4e579fba2bd-kube-api-access-clcrm\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013272 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-os-release\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013321 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-daemon-config\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013346 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-socket-dir-parent\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cnibin\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-etc-kubernetes\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f759c59-befa-4d12-ab4b-c4e579fba2bd-proxy-tls\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013511 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-k8s-cni-cncf-io\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-netns\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-kubelet\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013620 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-hosts-file\") pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013652 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-svsrp\" (UniqueName: \"kubernetes.io/projected/3f2095e9-5a78-45fb-a930-eacbd54ec73d-kube-api-access-svsrp\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-system-cni-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013682 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.013733 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-multus-certs\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.020030 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.025987 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:13 crc kubenswrapper[4792]: E0216 21:38:13.026088 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.036338 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.046880 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.060218 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.074402 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.084619 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.093382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.098239 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 21:33:12 +0000 UTC, rotation deadline is 2026-12-01 04:11:12.499839421 +0000 UTC Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.098388 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6894h32m59.401455587s for next certificate rotation Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.102426 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.110668 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-os-release\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cni-binary-copy\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " 
pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114855 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-bin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f759c59-befa-4d12-ab4b-c4e579fba2bd-rootfs\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114995 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f759c59-befa-4d12-ab4b-c4e579fba2bd-rootfs\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f759c59-befa-4d12-ab4b-c4e579fba2bd-mcd-auth-proxy-config\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.114971 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-bin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115046 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-os-release\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115127 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-system-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115088 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-system-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115165 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-multus\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115190 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-hostroot\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r4n9\" (UniqueName: \"kubernetes.io/projected/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-kube-api-access-9r4n9\") pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-conf-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhwqj\" (UniqueName: \"kubernetes.io/projected/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-kube-api-access-xhwqj\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-hostroot\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115302 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cnibin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-conf-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115305 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-cni-multus\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115321 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clcrm\" (UniqueName: \"kubernetes.io/projected/5f759c59-befa-4d12-ab4b-c4e579fba2bd-kube-api-access-clcrm\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115357 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-os-release\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-socket-dir-parent\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-daemon-config\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115465 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cnibin\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-os-release\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115500 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cnibin\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-socket-dir-parent\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115609 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cnibin\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " 
pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.115722 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-cni-binary-copy\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116082 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-etc-kubernetes\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116114 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-netns\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-etc-kubernetes\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116135 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f759c59-befa-4d12-ab4b-c4e579fba2bd-proxy-tls\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-k8s-cni-cncf-io\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116170 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-netns\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116187 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-daemon-config\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-kubelet\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116228 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-system-cni-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-k8s-cni-cncf-io\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-hosts-file\") pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116273 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svsrp\" (UniqueName: \"kubernetes.io/projected/3f2095e9-5a78-45fb-a930-eacbd54ec73d-kube-api-access-svsrp\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-system-cni-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116313 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116319 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-var-lib-kubelet\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-hosts-file\") 
pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-multus-certs\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-host-run-multus-certs\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116454 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f2095e9-5a78-45fb-a930-eacbd54ec73d-multus-cni-dir\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.116853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.117040 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f759c59-befa-4d12-ab4b-c4e579fba2bd-mcd-auth-proxy-config\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.119583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f759c59-befa-4d12-ab4b-c4e579fba2bd-proxy-tls\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.129475 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.130196 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhwqj\" (UniqueName: \"kubernetes.io/projected/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-kube-api-access-xhwqj\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.132668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clcrm\" (UniqueName: \"kubernetes.io/projected/5f759c59-befa-4d12-ab4b-c4e579fba2bd-kube-api-access-clcrm\") pod \"machine-config-daemon-szmc4\" (UID: \"5f759c59-befa-4d12-ab4b-c4e579fba2bd\") " pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.133065 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svsrp\" (UniqueName: 
\"kubernetes.io/projected/3f2095e9-5a78-45fb-a930-eacbd54ec73d-kube-api-access-svsrp\") pod \"multus-mp8ql\" (UID: \"3f2095e9-5a78-45fb-a930-eacbd54ec73d\") " pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.140863 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.151471 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.161192 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.192641 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:38:13 crc kubenswrapper[4792]: W0216 21:38:13.201988 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f759c59_befa_4d12_ab4b_c4e579fba2bd.slice/crio-10a4b682d456892fa733b0644741a4680fde1a5865e36243c4bdd88eec49e2ea WatchSource:0}: Error finding container 10a4b682d456892fa733b0644741a4680fde1a5865e36243c4bdd88eec49e2ea: Status 404 returned error can't find the container with id 10a4b682d456892fa733b0644741a4680fde1a5865e36243c4bdd88eec49e2ea Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.219207 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mp8ql" Feb 16 21:38:13 crc kubenswrapper[4792]: W0216 21:38:13.239900 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f2095e9_5a78_45fb_a930_eacbd54ec73d.slice/crio-9bcc3d88bc321ff75a81702cb3d4f6c6549d73981da3a1dcb9e91c4b09ebd5c1 WatchSource:0}: Error finding container 9bcc3d88bc321ff75a81702cb3d4f6c6549d73981da3a1dcb9e91c4b09ebd5c1: Status 404 returned error can't find the container with id 9bcc3d88bc321ff75a81702cb3d4f6c6549d73981da3a1dcb9e91c4b09ebd5c1 Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.245009 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rfdc5"] Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.245973 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.247977 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.248045 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.248086 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.248162 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.248190 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.248768 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.251485 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.264206 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.275864 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.289215 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.301885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.316021 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318641 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vfrl\" (UniqueName: \"kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318739 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318767 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318885 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.318940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320034 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320290 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns\") 
pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320411 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.320541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.334352 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.348767 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.365119 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.377488 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.393382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.407209 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.419226 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421183 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421216 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vfrl\" (UniqueName: \"kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421276 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421311 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421348 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421354 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421369 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash\") pod \"ovnkube-node-rfdc5\" (UID: 
\"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421473 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421528 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421545 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 
crc kubenswrapper[4792]: I0216 21:38:13.421579 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421627 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421658 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421692 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421707 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421708 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421728 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421758 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421759 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421774 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.421793 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.422054 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.422111 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.422117 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.426618 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.432849 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:13Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.438499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vfrl\" (UniqueName: \"kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl\") pod \"ovnkube-node-rfdc5\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.573427 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:13 crc kubenswrapper[4792]: W0216 21:38:13.584451 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616c8c01_b6e2_4851_9729_888790cbbe63.slice/crio-a35635598cbc2064aefc74b1ab85e0ab16ce48e4291a955ab13a2d8b62812e9d WatchSource:0}: Error finding container a35635598cbc2064aefc74b1ab85e0ab16ce48e4291a955ab13a2d8b62812e9d: Status 404 returned error can't find the container with id a35635598cbc2064aefc74b1ab85e0ab16ce48e4291a955ab13a2d8b62812e9d Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.743555 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 21:38:13 crc kubenswrapper[4792]: I0216 21:38:13.961924 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:14:47.496708777 +0000 UTC Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.025687 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.025728 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.025810 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.026133 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.053653 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.055180 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.055219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.055228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.055314 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.060276 4792 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.060540 4792 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.061408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.061453 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.061461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.061474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.061483 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.077031 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.079918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.079962 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.079971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.079988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.079997 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.095828 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.098783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.098819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.098829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.098843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.098853 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.110747 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.113399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.113434 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.113443 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.113456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.113467 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.115501 4792 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.115578 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist podName:67a11891-bd2f-46f7-beb7-7d1d70b3e6a2 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:14.615559568 +0000 UTC m=+27.268838459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-554x7" (UID: "67a11891-bd2f-46f7-beb7-7d1d70b3e6a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.125804 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.129055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.129094 4792 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.129130 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.129148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.129158 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.141177 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.141296 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.142725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.142763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.142772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.142787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.142798 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.168364 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0" exitCode=0 Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.168450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.168494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"a35635598cbc2064aefc74b1ab85e0ab16ce48e4291a955ab13a2d8b62812e9d"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.169591 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.169827 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerStarted","Data":"14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.169858 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerStarted","Data":"9bcc3d88bc321ff75a81702cb3d4f6c6549d73981da3a1dcb9e91c4b09ebd5c1"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.171606 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.171633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.171642 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"10a4b682d456892fa733b0644741a4680fde1a5865e36243c4bdd88eec49e2ea"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.182899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.199456 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\
":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.218744 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z 
is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.228217 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.232628 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.240928 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.247746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.247796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.247812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.247835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.247851 4792 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.249438 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.251240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r4n9\" (UniqueName: \"kubernetes.io/projected/d6da7745-c9c0-44c9-93e5-77cc1dd1d074-kube-api-access-9r4n9\") pod \"node-resolver-2vlsf\" (UID: \"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\") " pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.260183 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.270205 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.280165 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.292546 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.303681 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.315012 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.328388 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.341885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.350422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.350478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.350500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.350523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.350538 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.357394 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.372575 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.388148 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.398430 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.402370 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2vlsf" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.410402 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: W0216 21:38:14.415679 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6da7745_c9c0_44c9_93e5_77cc1dd1d074.slice/crio-d8c5cfe34628038f303d317031f03d86f9bc98e8fe29583a357ff86c0c326a1b WatchSource:0}: Error finding container d8c5cfe34628038f303d317031f03d86f9bc98e8fe29583a357ff86c0c326a1b: Status 404 returned error can't find the container with id d8c5cfe34628038f303d317031f03d86f9bc98e8fe29583a357ff86c0c326a1b Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.424326 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.436375 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.452643 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.452685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.452696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.452726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.452738 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.455029 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.474483 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.486590 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.496754 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.506309 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.516329 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:14Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.554802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.554845 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.554855 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.554871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.554881 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.631858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.632573 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/67a11891-bd2f-46f7-beb7-7d1d70b3e6a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-554x7\" (UID: \"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\") " pod="openshift-multus/multus-additional-cni-plugins-554x7"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.657733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.657781 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.657796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.657816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.657831 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.712395 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-554x7"
Feb 16 21:38:14 crc kubenswrapper[4792]: W0216 21:38:14.726696 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a11891_bd2f_46f7_beb7_7d1d70b3e6a2.slice/crio-d406f5fea9458ad713c25ea89f5027d42f8a35b29c21ffd2a8d36c622a9eef8f WatchSource:0}: Error finding container d406f5fea9458ad713c25ea89f5027d42f8a35b29c21ffd2a8d36c622a9eef8f: Status 404 returned error can't find the container with id d406f5fea9458ad713c25ea89f5027d42f8a35b29c21ffd2a8d36c622a9eef8f
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.760224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.760261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.760270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.760283 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.760293 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.773893 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.774470 4792 scope.go:117] "RemoveContainer" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7"
Feb 16 21:38:14 crc kubenswrapper[4792]: E0216 21:38:14.774660 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.862689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.862723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.862731 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.862743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.862776 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.962311 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:59:33.592401461 +0000 UTC
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.965548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.965583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.965615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.965635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:14 crc kubenswrapper[4792]: I0216 21:38:14.965646 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:14Z","lastTransitionTime":"2026-02-16T21:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.025767 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.025887 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.068504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.068540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.068550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.068571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.068585 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.170160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.170497 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.170510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.170526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.170538 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177428 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.177438 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.179178 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8" exitCode=0
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.179227 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.179243 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerStarted","Data":"d406f5fea9458ad713c25ea89f5027d42f8a35b29c21ffd2a8d36c622a9eef8f"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.180481 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2vlsf" event={"ID":"d6da7745-c9c0-44c9-93e5-77cc1dd1d074","Type":"ContainerStarted","Data":"494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.180509 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2vlsf" event={"ID":"d6da7745-c9c0-44c9-93e5-77cc1dd1d074","Type":"ContainerStarted","Data":"d8c5cfe34628038f303d317031f03d86f9bc98e8fe29583a357ff86c0c326a1b"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.192612 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.205913 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.228234 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.254730 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.267078 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.272374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.272406 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.272416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.272430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.272440 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.278713 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.289050 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.299101 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.315706 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.330558 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.342061 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.354377 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.365697 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kub
ernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.374329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.374366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.374377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.374395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.374406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.377202 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.390250 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.401738 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.413680 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.424614 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.437411 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.457827 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dgz2t"] Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.458256 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.459263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38
b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.460185 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.460206 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.460260 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.460439 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.472581 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.476032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.476068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.476080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.476097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.476108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.482741 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.493737 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.509193 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.518058 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.529802 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.540320 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51960a32-12c3-4050-99da-f97649c432c0-serviceca\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.540376 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rr5h\" (UniqueName: \"kubernetes.io/projected/51960a32-12c3-4050-99da-f97649c432c0-kube-api-access-5rr5h\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.540441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51960a32-12c3-4050-99da-f97649c432c0-host\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.541103 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.551761 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.561823 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.574280 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc 
kubenswrapper[4792]: I0216 21:38:15.577505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.577538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.577547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.577563 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.577571 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.592057 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z 
is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.601118 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.613715 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.624508 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.634301 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rr5h\" (UniqueName: \"kubernetes.io/projected/51960a32-12c3-4050-99da-f97649c432c0-kube-api-access-5rr5h\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.641392 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:38:23.641356608 +0000 UTC m=+36.294635509 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51960a32-12c3-4050-99da-f97649c432c0-host\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51960a32-12c3-4050-99da-f97649c432c0-serviceca\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641611 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.641642 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51960a32-12c3-4050-99da-f97649c432c0-host\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.641730 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.641784 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:23.641773099 +0000 UTC m=+36.295051990 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.642568 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/51960a32-12c3-4050-99da-f97649c432c0-serviceca\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.644551 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.658235 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf56
17b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.660575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rr5h\" (UniqueName: \"kubernetes.io/projected/51960a32-12c3-4050-99da-f97649c432c0-kube-api-access-5rr5h\") pod \"node-ca-dgz2t\" (UID: \"51960a32-12c3-4050-99da-f97649c432c0\") " pod="openshift-image-registry/node-ca-dgz2t" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.671419 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.680053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.680086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.680094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.680119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.680128 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.681135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.692311 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.742930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.742990 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.743038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743038 4792 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743113 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:23.743096655 +0000 UTC m=+36.396375546 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743152 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743171 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743183 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743202 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743238 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:23.743223738 +0000 UTC m=+36.396502639 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743241 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743266 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:38:15 crc kubenswrapper[4792]: E0216 21:38:15.743329 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:23.743305311 +0000 UTC m=+36.396584242 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.770626 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dgz2t"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.782227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.782280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.782293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.782309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.782727 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: W0216 21:38:15.784312 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51960a32_12c3_4050_99da_f97649c432c0.slice/crio-807cee36d0c74b6ecf0a86adb5a55e7a5d2365def950880d0fb75cbf652d9b34 WatchSource:0}: Error finding container 807cee36d0c74b6ecf0a86adb5a55e7a5d2365def950880d0fb75cbf652d9b34: Status 404 returned error can't find the container with id 807cee36d0c74b6ecf0a86adb5a55e7a5d2365def950880d0fb75cbf652d9b34
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.885686 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.885730 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.885742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.885758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.885769 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.962495 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:19:17.080808259 +0000 UTC
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.988196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.988471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.988480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.988494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:15 crc kubenswrapper[4792]: I0216 21:38:15.988502 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:15Z","lastTransitionTime":"2026-02-16T21:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.025624 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.025630 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:16 crc kubenswrapper[4792]: E0216 21:38:16.025889 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:16 crc kubenswrapper[4792]: E0216 21:38:16.025743 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.090738 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.090793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.090806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.090825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.090836 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.187713 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92" exitCode=0
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.187770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92"}
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.190168 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dgz2t" event={"ID":"51960a32-12c3-4050-99da-f97649c432c0","Type":"ContainerStarted","Data":"02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049"}
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.190220 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dgz2t" event={"ID":"51960a32-12c3-4050-99da-f97649c432c0","Type":"ContainerStarted","Data":"807cee36d0c74b6ecf0a86adb5a55e7a5d2365def950880d0fb75cbf652d9b34"}
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.192703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.192732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.192740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.192751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.192761 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
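[Editor's note] The repeated NodeNotReady/setters entries above all reduce to one condition: the CRI runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no network config yet (OVN-Kubernetes has not written one at this point in the boot). A minimal Go sketch of that directory check, using only the standard library and the conf-dir path taken from these log lines; the extension list mirrors what libcni scans for and is an assumption here, not quoted from kubelet source:

```go
// Sketch only: approximates the check behind "no CNI configuration file
// in /etc/kubernetes/cni/net.d/"; not the kubelet's actual implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether the conf dir holds any CNI network config.
func cniConfigPresent(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions libcni is generally documented to scan for (assumption).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d") // path from the log
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the multus/OVN pods later in this log write a config into that directory, the same check flips and the Ready condition clears.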
Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.206036 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.221706 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
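[Editor's note] Every status patch in these entries is rejected by the pod.network-node-identity.openshift.io validating webhook because its serving certificate expired on 2025-08-24, while the node clock reads 2026-02-16. A hedged diagnostic sketch in Go against the https://127.0.0.1:9743 endpoint named in the error: it dials with verification disabled purely so the presented leaf certificate can be read, then compares NotAfter to the local clock the same way the failing handshake does. The endpoint and timestamps come from the log; everything else is stdlib:

```go
// Sketch only: reads the webhook's presented certificate and checks its
// validity window by hand, reproducing the x509 expiry failure in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: NotAfter is validated below
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", leaf.NotBefore, leaf.NotAfter, now)
	if now.After(leaf.NotAfter) {
		// Matches the log: "certificate has expired ... current time ... is after ..."
		fmt.Printf("expired %s ago\n", now.Sub(leaf.NotAfter))
	}
}
```

With the dates in the log, the gap is on the order of six months, which is why every subsequent patch attempt fails identically until the webhook certificate is rotated.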
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.233933 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.248970 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.266213 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.278179 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.292214 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.294526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.294562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.294571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.294584 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.294608 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.311205 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
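[Editor's note] The condition={...} payload in the recurring setters.go entries is plain JSON (a NodeCondition object), so the Ready flag and reason can be pulled out of captured logs mechanically. A small stdlib-only sketch decoding the exact object logged above; the struct is a hand-written stand-in for the corev1 type, not imported from Kubernetes:

```go
// Sketch only: decodes the NodeCondition JSON exactly as it appears in the
// setters.go log entries, for offline log scraping.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim payload from the 21:38:16.294608 setters.go entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason) // Ready=False reason=KubeletNotReady
}
```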
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z 
is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.324585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.336333 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.348998 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.360658 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.370356 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.380811 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.394072 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.396256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.396284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.396293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.396308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.396318 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.407159 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.419214 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.435616 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.446834 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kub
ernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.457513 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.487553 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.498249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.498275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.498284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.498298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.498307 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.532762 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.570407 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z 
is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.600609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.600638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.600645 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.600660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.600671 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.608458 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.645381 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.688567 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.702590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.702659 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.702674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.702696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.702712 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.726585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc 
kubenswrapper[4792]: I0216 21:38:16.765026 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:16Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.804885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.804937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.804953 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.804976 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.804994 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.908092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.908167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.908191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.908221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.908244 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:16Z","lastTransitionTime":"2026-02-16T21:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:16 crc kubenswrapper[4792]: I0216 21:38:16.963531 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:00:44.060685054 +0000 UTC Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.011368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.011442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.011468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.011505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.011539 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.025825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:17 crc kubenswrapper[4792]: E0216 21:38:17.025964 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.115014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.115076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.115098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.115127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.115149 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.196504 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13" exitCode=0 Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.196560 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.217670 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.217714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.217725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.217746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.217761 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.222115 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.242022 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.259579 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.282943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.294353 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.307800 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.320946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.320975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.320983 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.320996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.321004 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.321527 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.334184 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.344363 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.359783 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.374920 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.386590 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.398325 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.409112 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kub
ernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:17Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.422405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.422437 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.422447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.422463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.422473 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.525062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.525102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.525117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.525129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.525138 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.627550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.627618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.627630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.627646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.627703 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.730245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.730287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.730299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.730315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.730326 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.800088 4792 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.832471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.832514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.832525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.832541 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.832554 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.934830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.934862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.934872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.934884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.934893 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:17Z","lastTransitionTime":"2026-02-16T21:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:17 crc kubenswrapper[4792]: I0216 21:38:17.964361 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:19:53.32620102 +0000 UTC Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.026193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.026410 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:18 crc kubenswrapper[4792]: E0216 21:38:18.026692 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:18 crc kubenswrapper[4792]: E0216 21:38:18.026753 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.037380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.037419 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.037433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.037447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.037458 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.044776 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.061480 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.074006 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.091474 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.113088 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z 
is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.130131 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.142969 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.150428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.150465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.150478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.150498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.150514 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.155830 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.166667 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.176878 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.188399 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.200729 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.202478 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253" exitCode=0 Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.202588 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.212961 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.217993 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.227248 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.243829 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.253071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.253141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.253153 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.253170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.253215 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.259050 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.277643 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.296084 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.307510 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.318847 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.338314 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.349357 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.355408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.355445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.355454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.355469 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.355480 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.359264 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.370520 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.384433 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.397961 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.410807 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.449663 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kub
ernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.457764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.457793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.457804 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.457818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.457827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.560637 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.560684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.560931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.561232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.561395 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.664393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.664420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.664428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.664441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.664453 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.767010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.767043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.767052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.767066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.767076 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.870002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.870054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.870071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.870120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.870137 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.965112 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:18:08.327296733 +0000 UTC Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.973230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.973270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.973282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.973301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:18 crc kubenswrapper[4792]: I0216 21:38:18.973313 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:18Z","lastTransitionTime":"2026-02-16T21:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.025996 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:19 crc kubenswrapper[4792]: E0216 21:38:19.026208 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.075426 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.075464 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.075472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.075502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.075512 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.178451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.178505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.178520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.178544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.178562 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.222577 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1" exitCode=0 Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.222634 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.238917 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.252161 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.268016 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.280933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.280963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.280973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.280987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.280999 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.285253 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8
512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.295961 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.311998 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.325289 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.337710 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.348823 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.359513 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.373729 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.382597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.382678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.382688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.382701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.382713 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.384191 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.396134 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.407218 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:19Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.485112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.485153 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.485168 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.485185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.485200 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.588183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.588208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.588216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.588230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.588240 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.690278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.690343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.690368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.690399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.690420 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.793038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.793110 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.793134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.793163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.793186 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.895354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.895392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.895403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.895419 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.895430 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:19Z","lastTransitionTime":"2026-02-16T21:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:19 crc kubenswrapper[4792]: I0216 21:38:19.965709 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:21:01.363674532 +0000 UTC Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.002469 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.002529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.002547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.002574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.002591 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.026112 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:20 crc kubenswrapper[4792]: E0216 21:38:20.026320 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.026471 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:20 crc kubenswrapper[4792]: E0216 21:38:20.027103 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.109166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.109223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.109237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.109255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.109268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.211401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.211453 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.211464 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.211478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.211488 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.229377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.229687 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.233663 4792 generic.go:334] "Generic (PLEG): container finished" podID="67a11891-bd2f-46f7-beb7-7d1d70b3e6a2" containerID="cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578" exitCode=0 Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.233697 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerDied","Data":"cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.251283 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.261921 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.268267 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.286554 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.305388 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226
cdd07487db662034d2c7a760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.313650 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.313687 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.313696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.313712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.313723 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.317513 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.328818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.339025 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.348475 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.358234 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.369379 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.382129 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.393617 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.409932 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.416488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.416529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.416541 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.416561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.416572 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.423912 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.435674 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.445984 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.457443 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.467775 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.477818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.493111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.509665 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.519615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.519664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.519678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.519699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.519711 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.525414 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.537138 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.552280 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.564185 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.578041 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.594620 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.622120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.622170 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.622233 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.622294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.622378 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.647055 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226
cdd07487db662034d2c7a760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:20Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.725822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.725920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.725937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.725984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.726000 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.829436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.829482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.829493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.829513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.829528 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.933034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.933098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.933114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.933140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.933157 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:20Z","lastTransitionTime":"2026-02-16T21:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:20 crc kubenswrapper[4792]: I0216 21:38:20.966149 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:08:47.935153586 +0000 UTC Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.025794 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:21 crc kubenswrapper[4792]: E0216 21:38:21.025922 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.035543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.035571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.035581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.035598 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.035623 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.138191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.138223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.138233 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.138248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.138259 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.239474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.239894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.240039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.240204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.240357 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.240539 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.240585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" event={"ID":"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2","Type":"ContainerStarted","Data":"af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.241174 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.253492 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.265027 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.266335 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.278129 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.291558 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.301868 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.318956 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.338083 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.343626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.343659 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.343669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.343684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.343695 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.359128 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.371700 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.384134 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.397926 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.413369 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.430850 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.445699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.445742 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.445751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.445767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.445777 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.453342 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226
cdd07487db662034d2c7a760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.464161 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.476478 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.496533 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.508485 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.521502 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.533082 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.548857 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.548893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.548901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.548914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.548925 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.550649 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.561982 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.572366 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.586276 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.598186 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.617095 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.639649 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.651535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.651591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.651629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.651655 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.651671 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.658924 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:21Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.753970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.754029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.754077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.754098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.754113 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.857173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.857490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.857500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.857514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.857523 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.959653 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.959699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.959711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.959725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.959736 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:21Z","lastTransitionTime":"2026-02-16T21:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:21 crc kubenswrapper[4792]: I0216 21:38:21.967312 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:24:10.669931245 +0000 UTC Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.026053 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.026110 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:22 crc kubenswrapper[4792]: E0216 21:38:22.026165 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:22 crc kubenswrapper[4792]: E0216 21:38:22.026227 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.062391 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.062435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.062448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.062464 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.062476 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.164828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.164873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.164885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.164900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.164911 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.244287 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.266674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.266739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.266749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.266760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.266769 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.369532 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.369574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.369585 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.369607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.369653 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.473159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.473211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.473229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.473251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.473269 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.575963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.576021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.576039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.576062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.576079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.679224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.679264 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.679279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.679310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.679320 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.781505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.781568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.781588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.781662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.781700 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.884478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.884537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.884554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.884594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.884640 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.968152 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:09:19.292182311 +0000 UTC Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.988125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.988169 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.988195 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.988218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:22 crc kubenswrapper[4792]: I0216 21:38:22.988235 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:22Z","lastTransitionTime":"2026-02-16T21:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.025913 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.026047 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.090522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.090589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.090651 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.090679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.090704 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.194217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.194268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.194283 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.194305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.194325 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.250206 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/0.log" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.252729 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760" exitCode=1 Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.252776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.253660 4792 scope.go:117] "RemoveContainer" containerID="97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.266598 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.277372 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.288872 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.296041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.296080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.296089 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.296104 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.296114 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.303290 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.315167 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.328810 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.341893 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.358068 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.372104 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.388297 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.398076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.398117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.398133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.398153 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.398169 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.408405 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.420688 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.434104 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.462547 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:22Z\\\",\\\"message\\\":\\\"es/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 21:38:22.341290 6123 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 21:38:22.341312 6123 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 21:38:22.341330 6123 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 21:38:22.341337 6123 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 21:38:22.341372 6123 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 21:38:22.341385 6123 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 21:38:22.341411 6123 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 21:38:22.341429 6123 factory.go:656] Stopping watch factory\\\\nI0216 21:38:22.341431 6123 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 21:38:22.341437 6123 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 21:38:22.341439 6123 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 21:38:22.341453 6123 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 21:38:22.341444 6123 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 21:38:22.341461 6123 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 21:38:22.341470 6123 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:23Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.500043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.500078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.500086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.500099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.500108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.603636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.603706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.603729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.603759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.603782 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.708102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.708156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.708167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.708187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.708196 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.725547 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.725695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.725757 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 21:38:39.725733188 +0000 UTC m=+52.379012079 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.725774 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.725823 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:39.7258088 +0000 UTC m=+52.379087751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.811039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.811091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.811102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.811119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.811129 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.827248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.827334 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.827367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827441 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827470 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827481 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827500 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827529 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827541 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827486 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827541 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:38:39.827522119 +0000 UTC m=+52.480801010 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827612 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:39.8275921 +0000 UTC m=+52.480870991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:23 crc kubenswrapper[4792]: E0216 21:38:23.827650 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:39.827636162 +0000 UTC m=+52.480915113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.914135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.914171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.914180 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.914194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.914204 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:23Z","lastTransitionTime":"2026-02-16T21:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:23 crc kubenswrapper[4792]: I0216 21:38:23.968766 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:21:20.631034276 +0000 UTC Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.017360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.017397 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.017405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.017419 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.017428 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.025745 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.025780 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.025882 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.025998 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.119138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.119182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.119190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.119204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.119215 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.221853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.221909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.221922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.221937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.221947 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.259493 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/1.log" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.260892 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/0.log" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.265566 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0" exitCode=1 Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.265683 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.265773 4792 scope.go:117] "RemoveContainer" containerID="97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.266878 4792 scope.go:117] "RemoveContainer" containerID="3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0" Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.267180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.286669 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.309145 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4
fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fcc7fe0546e4b2889f54c8a62fc9fe0ca76226cdd07487db662034d2c7a760\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:22Z\\\",\\\"message\\\":\\\"es/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 21:38:22.341290 6123 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 21:38:22.341312 6123 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 21:38:22.341330 6123 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 21:38:22.341337 6123 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 21:38:22.341372 6123 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 21:38:22.341385 6123 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 21:38:22.341411 6123 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 21:38:22.341429 6123 factory.go:656] Stopping watch factory\\\\nI0216 21:38:22.341431 6123 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 21:38:22.341437 6123 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 21:38:22.341439 6123 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 21:38:22.341453 6123 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 21:38:22.341444 6123 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 21:38:22.341461 6123 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 21:38:22.341470 6123 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvsw
itch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.324270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.324323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.324353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.324379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.324397 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.331267 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.347555 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.368582 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.390537 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.406462 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.421431 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.427468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.427523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.427537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.427558 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.427573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.438517 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.452991 4792 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.463727 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.478839 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.491098 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.501549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.501765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.501851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.501931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.502002 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.507114 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.515208 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.519559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.519691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.519763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.519830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.519909 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.532193 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 
2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.536872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.536921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.536937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.536959 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.536975 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.552511 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 
2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.555979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.556004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.556012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.556026 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.556035 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.567404 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 
2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.570652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.570683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.570692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.570704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.570713 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.580217 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 
2025-08-24T17:21:41Z" Feb 16 21:38:24 crc kubenswrapper[4792]: E0216 21:38:24.580375 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.581827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.581858 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.581870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.581886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.581897 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.684221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.684362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.684393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.684425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.684448 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.787145 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.787194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.787205 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.787221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.787234 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.889522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.889574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.889586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.889619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.889632 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.969155 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:22:46.540782354 +0000 UTC
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.992191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.992244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.992254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.992267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:24 crc kubenswrapper[4792]: I0216 21:38:24.992275 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:24Z","lastTransitionTime":"2026-02-16T21:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.025661 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:38:25 crc kubenswrapper[4792]: E0216 21:38:25.025782 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.094670 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.094715 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.094725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.094741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.094752 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.196832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.196867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.196879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.196897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.196910 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.269157 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/1.log"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.272588 4792 scope.go:117] "RemoveContainer" containerID="3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0"
Feb 16 21:38:25 crc kubenswrapper[4792]: E0216 21:38:25.272744 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.285105 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.298942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.299150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.299261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.299356 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.299441 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.304109 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.325252 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.353965 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.367853 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.384060 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.396677 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.401570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.401604 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.401624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.401639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.401649 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.407135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.417519 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.428401 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.446520 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.461242 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.472777 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.485043 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kub
ernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.503693 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.503880 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.503943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.503999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.504052 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.510581 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz"] Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.511042 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.512526 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.513388 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.528867 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.540545 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.550061 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.562195 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0
c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.578663 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff
096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.589071 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.598225 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.607248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.607307 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.607330 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.607360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.607382 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.614585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\
"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.631760 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.647761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.647940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.648022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3771a924-fabc-44f7-a2c8-8484df9700c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.648137 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwd47\" (UniqueName: \"kubernetes.io/projected/3771a924-fabc-44f7-a2c8-8484df9700c8-kube-api-access-bwd47\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.650282 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.670977 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.690550 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.710018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.710183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.710320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.710447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.710588 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
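
[annotation] The recurring "Node became not ready" condition ("no CNI configuration file in /etc/kubernetes/cni/net.d/") comes from the container runtime's network-readiness probe: until a CNI network config appears in the conf directory, NetworkReady stays false and the node stays NotReady. A sketch under the assumption that the probe essentially reduces to "is there any config file in the directory" (CRI-O's actual file matching may differ):

```go
// Sketch: reproduce the NetworkReady=false condition by checking the
// CNI conf dir named in the log for any plausible network config.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
	var files []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		files = append(files, m...)
	}
	if len(files) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configs found:", files)
}
```
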
Has your network provider started?"} Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.711938 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.729206 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.749502 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.749641 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.749685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3771a924-fabc-44f7-a2c8-8484df9700c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.749744 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwd47\" (UniqueName: \"kubernetes.io/projected/3771a924-fabc-44f7-a2c8-8484df9700c8-kube-api-access-bwd47\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.750591 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-ovnkube-config\") 
pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.751042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3771a924-fabc-44f7-a2c8-8484df9700c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.752097 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:25Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.755525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3771a924-fabc-44f7-a2c8-8484df9700c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.779268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwd47\" (UniqueName: \"kubernetes.io/projected/3771a924-fabc-44f7-a2c8-8484df9700c8-kube-api-access-bwd47\") pod \"ovnkube-control-plane-749d76644c-tv2mz\" (UID: \"3771a924-fabc-44f7-a2c8-8484df9700c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.813080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.813122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.813143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.813163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.813176 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.822652 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" Feb 16 21:38:25 crc kubenswrapper[4792]: W0216 21:38:25.841222 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3771a924_fabc_44f7_a2c8_8484df9700c8.slice/crio-868205706c613ea826b47e08c71cbf16856e0e714cd53630ad93a05eaf05ed31 WatchSource:0}: Error finding container 868205706c613ea826b47e08c71cbf16856e0e714cd53630ad93a05eaf05ed31: Status 404 returned error can't find the container with id 868205706c613ea826b47e08c71cbf16856e0e714cd53630ad93a05eaf05ed31 Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.915939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.915978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.915991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.916008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.916018 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:25Z","lastTransitionTime":"2026-02-16T21:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:25 crc kubenswrapper[4792]: I0216 21:38:25.970223 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:18:40.265825463 +0000 UTC Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.017621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.017657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.017666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.017679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.017688 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.025642 4792 util.go:30] "No sandbox for pod can be found. 
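
[annotation] One non-error line worth pulling out of the noise: certificate_manager reports the kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2025-12-02, i.e. a deadline already in the past relative to the node clock, so the kubelet will attempt rotation immediately. To my understanding, client-go schedules rotation at a jittered point around 70-90% of the certificate's validity window; a sketch of that computation, with the one-year lifetime being an assumption (the log does not show NotBefore):

```go
// Sketch (assumed behavior): pick a rotation deadline at a random
// point in roughly the 70-90% span of the certificate's lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration from the log; NotBefore assumed one year earlier.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```
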
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:26 crc kubenswrapper[4792]: E0216 21:38:26.025747 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.025827 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:26 crc kubenswrapper[4792]: E0216 21:38:26.025951 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.120341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.120379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.120389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.120404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.120419 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
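
[annotation] The "Error syncing pod, skipping ... network is not ready" pairs for network-check-target-xd92c and network-check-source-55646444c4-trplf show the other half of the NotReady condition: sandbox creation for pods on the cluster network is deferred until NetworkReady, while host-network static pods (kube-apiserver-crc, kube-controller-manager-crc) keep running, which is why their containers stay up even as their status patches fail. A hedged sketch of that gate, with illustrative names:

```go
// Sketch: skip sandbox creation for cluster-network pods while the
// runtime reports NetworkReady=false; host-network pods are exempt.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return errors.New("network is not ready: NetworkReady=false")
	}
	return nil // proceed to create the pod sandbox
}

func main() {
	pods := []pod{
		{"network-check-target-xd92c", false}, // cluster network: deferred
		{"kube-apiserver-crc", true},          // host network: allowed
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod %q, skipping: %v\n", p.name, err)
		} else {
			fmt.Printf("pod %q: sandbox creation allowed\n", p.name)
		}
	}
}
```
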
Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.222684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.222717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.222725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.222737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.222746 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.276696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" event={"ID":"3771a924-fabc-44f7-a2c8-8484df9700c8","Type":"ContainerStarted","Data":"5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.276741 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" event={"ID":"3771a924-fabc-44f7-a2c8-8484df9700c8","Type":"ContainerStarted","Data":"890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.276755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" event={"ID":"3771a924-fabc-44f7-a2c8-8484df9700c8","Type":"ContainerStarted","Data":"868205706c613ea826b47e08c71cbf16856e0e714cd53630ad93a05eaf05ed31"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.289196 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.302544 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.313709 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.324872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.324909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.324918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.324933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.324944 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.325305 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.338961 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.350127 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.361630 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.375598 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.393036 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.410207 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427413 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427515 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427534 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.427529 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.443411 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.457250 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.470815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.484895 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.530051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.530119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.530132 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.530151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.530166 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.633729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.633775 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.633787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.633803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.633814 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.642366 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sxb4b"] Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.642852 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: E0216 21:38:26.642918 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
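Every "Failed to update status for pod" entry in this stretch fails the same way: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-16T21:38:26Z. The error wording is Go's standard crypto/x509 validity check, which rejects any verification time outside the certificate's [NotBefore, NotAfter] window. A minimal sketch of that rule, reusing the two timestamps from the log (the NotBefore value and the helper name checkValidity are assumptions for illustration; the log only shows NotAfter):

```go
// Sketch of the crypto/x509 validity-window rule behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the standard-library rule: a certificate is
// acceptable only at times t with NotBefore <= t <= NotAfter.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("x509: certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC), // assumed issue time
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // NotAfter reported in the log
	}
	kubeletClock := time.Date(2026, 2, 16, 21, 38, 26, 0, time.UTC) // "current time" in the log
	fmt.Println(checkValidity(cert, kubeletClock))
	// x509: certificate has expired: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z
}
```

Because the check is purely clock-versus-validity-window, no kubelet-side retry can succeed until the webhook's certificate is rotated (or the cluster renews its internal PKI); the retries that follow below fail with the identical message.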
pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.654473 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.670647 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.696549 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.715374 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.725738 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.736304 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.736363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.736380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.736403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.736420 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.742731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.759330 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.759547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.759577 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvc86\" (UniqueName: \"kubernetes.io/projected/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-kube-api-access-vvc86\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.771438 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.782551 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.792891 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.802042 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.816527 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.827733 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839246 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.839369 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.849717 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.860150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.860329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvc86\" (UniqueName: \"kubernetes.io/projected/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-kube-api-access-vvc86\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: E0216 21:38:26.860338 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:26 crc kubenswrapper[4792]: E0216 21:38:26.860516 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:27.360501414 +0000 UTC m=+40.013780305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.862633 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:26Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.879673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvc86\" (UniqueName: \"kubernetes.io/projected/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-kube-api-access-vvc86\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.942201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.942227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.942234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.942247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.942255 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:26Z","lastTransitionTime":"2026-02-16T21:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:26 crc kubenswrapper[4792]: I0216 21:38:26.970633 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:50:39.013054001 +0000 UTC Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.025878 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:27 crc kubenswrapper[4792]: E0216 21:38:27.026340 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.026900 4792 scope.go:117] "RemoveContainer" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.044895 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.044939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.044951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.044970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.044982 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.147799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.147851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.147865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.147883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.147896 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.250696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.250932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.250941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.250965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.250975 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.282877 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.284805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.285108 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.300150 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.312662 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.323764 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.340720 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.353095 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.353333 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.353421 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.353508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.353567 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.365196 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.365078 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff
096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: E0216 21:38:27.365343 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:27 crc kubenswrapper[4792]: E0216 21:38:27.365395 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:28.365380995 +0000 UTC m=+41.018659886 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.381977 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.393571 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.405962 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc 
kubenswrapper[4792]: I0216 21:38:27.418019 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.428734 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.439818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.451717 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.455450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.455500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.455512 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.455531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.455574 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.466463 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.490522 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.527819 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.542964 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:27Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.557920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.557948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.557956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.557968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.557978 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.660325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.660378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.660393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.660412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.660424 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.764036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.764107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.764125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.764148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.764165 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.867300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.867360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.867379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.867401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.867422 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
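The NodeNotReady loop above repeats because the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears. A minimal sketch of that readiness check, using the conf directory named in the log (the extension set is an assumption matching common CNI config loaders):

```go
// Sketch: scan the CNI conf dir the kubelet complains about; report
// NetworkReady accordingly. Path taken from the log; extensions assumed.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pattern))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true:", found)
}
```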
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970123 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970162 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:27Z","lastTransitionTime":"2026-02-16T21:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:27 crc kubenswrapper[4792]: I0216 21:38:27.970837 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:03:24.839206899 +0000 UTC
Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.025382 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.025445 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:28 crc kubenswrapper[4792]: E0216 21:38:28.025681 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.025701 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:28 crc kubenswrapper[4792]: E0216 21:38:28.025811 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:38:28 crc kubenswrapper[4792]: E0216 21:38:28.025898 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
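The certificate_manager entry above shows a rotation deadline (2025-12-03) that is already in the past relative to the node's clock, which is consistent with a VM that was suspended across the rotation window. client-go's certificate manager schedules rotation at a jittered 70-90% of the certificate's lifetime (an assumption based on upstream behavior); a sketch of that computation, with an assumed one-year lifetime:

```go
// Sketch (assumed jitter band and lifetime): how a rotation deadline like the
// one logged above can end up behind "now" after a long suspend.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point at 70-90% of the cert's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log line
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed lifetime
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```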
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.042467 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.066815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e
7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" 
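The payloads in these entries are strategic-merge patches that the kubelet's status manager sends to each pod's status subresource; the $setElementOrder keys let the API server merge the conditions list deterministically. A sketch of the same call shape with client-go (assumes in-cluster credentials and a trimmed patch body; namespace and pod name taken from the log), not the kubelet's actual code path:

```go
// Sketch: issue a strategic-merge PATCH against a pod's "status" subresource,
// the operation the expired webhook keeps rejecting above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Trimmed illustration of the logged payloads.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	_, err = client.CoreV1().Pods("openshift-multus").Patch(
		context.TODO(), "multus-mp8ql",
		types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}, "status",
	)
	if err != nil {
		// With the expired webhook cert, this fails exactly as logged above.
		panic(err)
	}
}
```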
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.072579 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.072657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.072671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.072699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.072712 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.096097 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff
096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
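The waiting reason in this entry, "back-off 10s restarting failed container", is the first step of the kubelet's exponential restart back-off for a crash-looping container. A sketch of the schedule, using the 10s base visible in the log and a 5m cap that matches common upstream defaults (the cap value is an assumption):

```go
// Sketch: exponential restart back-off of the kind behind CrashLoopBackOff;
// delay doubles per failure until it saturates at the cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	base, cap := 10*time.Second, 5*time.Minute // base from the log; cap assumed
	delay := base
	for failure := 1; failure <= 7; failure++ {
		fmt.Printf("failure %d: back-off %s\n", failure, delay)
		delay *= 2
		if delay > cap {
			delay = cap
		}
	}
}
```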
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.109481 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.120636 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.133770 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.148403 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.164366 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175471 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
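The terminated records in these entries carry exitCode 137 with reason ContainerStatusUnknown: by the usual 128+signal convention, 137 decodes as SIGKILL (signal 9), i.e. the runtime killed or lost the container rather than the process exiting on its own. A one-liner for the decoding:

```go
// Sketch: decode a signal-terminated exit code (137 = 128 + SIGKILL(9)).
package main

import "fmt"

func main() {
	exitCode := 137 // from the containerStatuses above
	if exitCode > 128 {
		fmt.Printf("exit code %d => terminated by signal %d (SIGKILL is 9)\n",
			exitCode, exitCode-128)
	}
}
```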
Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.175975 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.185135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
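Each of these failures is the same round-trip: an HTTPS POST to the node-local admission endpoint whose serving certificate no longer verifies. A sketch reproducing it (endpoint and path from the log; the AdmissionReview body is a placeholder, and off-cluster the call may fail with "connection refused" instead of the x509 error):

```go
// Sketch: POST to the webhook endpoint with default TLS verification; once
// the serving cert lapses this returns the x509 validity error logged above.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	_, err := http.Post(
		"https://127.0.0.1:9743/pod?timeout=10s",
		"application/json",
		bytes.NewBufferString(`{"kind":"AdmissionReview"}`), // placeholder body
	)
	// Expected on this node: tls: failed to verify certificate: x509:
	// certificate has expired or is not yet valid.
	fmt.Println(err)
}
```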
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.197476 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.210651 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.222594 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.233693 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.247068 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.261122 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:28Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.277665 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.277704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.277716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.277732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.277745 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.374239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:28 crc kubenswrapper[4792]: E0216 21:38:28.374492 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:28 crc kubenswrapper[4792]: E0216 21:38:28.374691 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:30.374654482 +0000 UTC m=+43.027933423 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.379216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.379276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.379287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.379302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.379310 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.481938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.481991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.482001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.482014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.482024 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.584509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.584548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.584557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.584571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.584580 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.686936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.687008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.687025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.687052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.687069 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.789854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.789909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.789922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.789938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.789949 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.892948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.893029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.893046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.893078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.893100 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.971011 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:07:34.413091325 +0000 UTC Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.996295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.996375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.996400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.996429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:28 crc kubenswrapper[4792]: I0216 21:38:28.996451 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:28Z","lastTransitionTime":"2026-02-16T21:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.025285 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:29 crc kubenswrapper[4792]: E0216 21:38:29.025543 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.099433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.099485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.099499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.099516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.099528 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.201989 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.202054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.202077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.202111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.202135 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.304647 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.305167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.305243 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.305336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.305439 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.408443 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.408486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.408496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.408511 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.408523 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.511213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.511504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.511622 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.511712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.511801 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.614942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.614990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.615001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.615017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.615028 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.717490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.717870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.718016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.718129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.718228 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.822179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.822535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.822775 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.822929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.823063 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.925938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.926201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.926271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.926358 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.926424 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:29Z","lastTransitionTime":"2026-02-16T21:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:29 crc kubenswrapper[4792]: I0216 21:38:29.971919 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:05:24.180059545 +0000 UTC Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.025284 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:30 crc kubenswrapper[4792]: E0216 21:38:30.025396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.025451 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.025485 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:30 crc kubenswrapper[4792]: E0216 21:38:30.025729 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:30 crc kubenswrapper[4792]: E0216 21:38:30.025807 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.028721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.028752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.028774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.028788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.028798 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.130904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.130956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.130971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.130991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.131006 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.233395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.233454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.233467 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.233485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.233499 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.336447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.336524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.336550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.336578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.336637 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.394942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:30 crc kubenswrapper[4792]: E0216 21:38:30.395073 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:30 crc kubenswrapper[4792]: E0216 21:38:30.395129 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:34.39511232 +0000 UTC m=+47.048391211 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.439509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.439557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.439573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.439588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.439617 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.542329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.542381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.542416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.542444 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.542465 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
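
The nestedpendingoperations entry above schedules the next mount attempt 4s out (m=+47); a later entry in this same log schedules 8s (m=+55), i.e. the delay doubles per consecutive failure. A minimal Go sketch of that doubling-with-cap pattern follows; the function names and the 2-minute cap are illustrative assumptions, not the kubelet's actual implementation.

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextBackoff doubles the retry delay up to a cap, the pattern visible
    // in the durationBeforeRetry values above (4s, then 8s). Illustrative only.
    func nextBackoff(d, max time.Duration) time.Duration {
    	d *= 2
    	if d > max {
    		d = max
    	}
    	return d
    }

    func main() {
    	delay := 2 * time.Second // assumed initial delay
    	for i := 1; i <= 4; i++ {
    		delay = nextBackoff(delay, 2*time.Minute)
    		fmt.Printf("retry %d after %s\n", i, delay)
    	}
    }
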
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.646153 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.646236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.646259 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.646291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.646313 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.749046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.749096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.749104 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.749116 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.749123 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.851396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.851476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.851508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.851536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.851561 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.954695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.954797 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.954823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.954853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.954876 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:30Z","lastTransitionTime":"2026-02-16T21:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:30 crc kubenswrapper[4792]: I0216 21:38:30.973468 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:53:07.299150828 +0000 UTC
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.025679 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:38:31 crc kubenswrapper[4792]: E0216 21:38:31.025884 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.057421 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.057468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.057479 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.057495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.057511 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.160438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.160505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.160522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.160548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.160567 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.263833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.263867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.263875 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.263887 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.263897 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.367303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.367388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.367412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.367441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.367461 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.470257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.470333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.470368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.470398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.470420 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.572346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.572380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.572389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.572402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.572411 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.676138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.676213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.676234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.676263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.676285 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.778652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.778723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.778746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.778774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.778796 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.880896 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.881000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.881024 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.881055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.881076 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.973974 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:14:30.85446195 +0000 UTC
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.984839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.984926 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.984957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.984988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:31 crc kubenswrapper[4792]: I0216 21:38:31.985012 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:31Z","lastTransitionTime":"2026-02-16T21:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.026135 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.026209 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:32 crc kubenswrapper[4792]: E0216 21:38:32.026345 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.026380 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:32 crc kubenswrapper[4792]: E0216 21:38:32.026545 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:38:32 crc kubenswrapper[4792]: E0216 21:38:32.026741 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
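
Every NetworkPluginNotReady message above traces back to the same condition: the container runtime found no CNI config file in /etc/kubernetes/cni/net.d/, so the kubelet keeps the node NotReady and skips syncing any pod that needs a sandbox. A minimal Go sketch of that readiness test follows; it is an illustrative approximation, not the runtime's actual code, and the extension list is an assumption.

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // cniConfigPresent reports whether any CNI config file exists in dir,
    // approximating the check behind the repeated message above.
    func cniConfigPresent(dir string) bool {
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pat))
    		if err == nil && len(matches) > 0 {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
    	if cniConfigPresent(dir) {
    		fmt.Println("NetworkReady=true")
    	} else {
    		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
    	}
    }

Once the network operator writes a config into that directory, the check flips and the NotReady heartbeats below stop.
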
pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.088729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.088809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.088834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.088864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.088887 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.191101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.191161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.191177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.191196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.191210 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.293431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.293480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.293496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.293516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.293530 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.395760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.395859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.395886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.395917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.395942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.499209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.499949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.499985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.500006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.500024 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.603347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.603391 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.603402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.603500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.603514 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.706250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.706302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.706316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.706338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.706355 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.809276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.809340 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.809357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.809382 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.809399 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.912206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.912249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.912257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.912271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.912280 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:32Z","lastTransitionTime":"2026-02-16T21:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:32 crc kubenswrapper[4792]: I0216 21:38:32.974795 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:38:01.691654002 +0000 UTC Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.016004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.016058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.016069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.016087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.016099 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.025334 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:33 crc kubenswrapper[4792]: E0216 21:38:33.025448 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.118646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.118698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.118710 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.118725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.118734 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.221631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.221683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.221697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.221714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.221726 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.324519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.324564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.324574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.324591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.324636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.426893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.426939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.426956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.426973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.426984 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.529383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.529449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.529472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.529513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.529534 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
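
The setters.go:603 entries above all log the same Ready=False condition object, with only the heartbeat and transition timestamps advancing. The following self-contained Go sketch mirrors that payload with a local struct (not the real k8s.io/api types; field names are chosen to match the JSON in the log) and reproduces the logged condition:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    // nodeCondition is a minimal local mirror of the condition JSON logged
    // by setters.go:603 above. Illustrative only.
    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	now := time.Now().UTC().Format(time.RFC3339)
    	c := nodeCondition{
    		Type:               "Ready",
    		Status:             "False",
    		LastHeartbeatTime:  now,
    		LastTransitionTime: now,
    		Reason:             "KubeletNotReady",
    		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
    	}
    	b, _ := json.Marshal(c)
    	fmt.Println(string(b)) // same shape as the condition={...} payload above
    }
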
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.631942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.632010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.632028 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.632051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.632071 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.734047 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.734094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.734105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.734119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.734132 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.836412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.836456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.836470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.836485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.836496 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.938663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.938717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.938732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.938755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.938769 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:33Z","lastTransitionTime":"2026-02-16T21:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:33 crc kubenswrapper[4792]: I0216 21:38:33.975381 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:18:12.652975234 +0000 UTC
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.026039 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.026074 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.026177 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.026264 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.026364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.026456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.040282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.040312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.040323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.040360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.040369 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.143109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.143521 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.143768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.143967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.144135 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.247202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.247258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.247274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.247297 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.247315 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.350407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.350475 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.350488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.350503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.350516 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.440328 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.440473 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.440519 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:42.440506149 +0000 UTC m=+55.093785040 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.453008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.453050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.453059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.453072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.453081 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.555586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.555640 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.555649 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.555662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.555671 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
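
The certificate_manager.go:356 entries above report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline each time, which suggests the deadline is re-drawn with random jitter inside the certificate's validity window. A Go sketch of that pattern follows; the 70-90% window and the assumed issue time are illustrative assumptions, not confirmed by this log.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a jittered point late in the certificate's
    // validity period, the pattern suggested by the varying deadlines above.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	frac := 0.7 + 0.2*rand.Float64() // random point in [70%, 90%) of validity (assumed window)
    	return notBefore.Add(time.Duration(frac * float64(total)))
    }

    func main() {
    	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    	}
    }

Note that each logged deadline already lies in the past relative to the Feb 16 timestamps, so the manager would consider the serving certificate due for rotation immediately.
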
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.659127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.659194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.659212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.659234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.659251 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.761971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.762017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.762027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.762046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.762057 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.864857 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.864897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.864907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.864922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.864935 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.902517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.902553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.902561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.902577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.902586 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.916314 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:34Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.919403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.919445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.919455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.919470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.919481 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.933918 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:34Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.938319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.938374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
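The status-patch failure above pins down the root cause for this whole window: the kubelet cannot report node status because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-16. A minimal way to confirm what that endpoint is actually serving, sketched in Python on the assumption that Python 3 and the third-party cryptography package are available on the node (illustrative tooling, not part of the cluster):

    # Illustrative diagnostic (assumption: Python 3 + the "cryptography" package).
    # Fetches the webhook's serving certificate without verifying it, so the
    # handshake succeeds even though the chain is already expired.
    import ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook POST above

    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)  # error says 2025-08-24T17:21:41Z

If not_valid_after matches the 2025-08-24 date in the error, that points at certificate rotation (or a badly skewed node clock) rather than at the kubelet itself.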
event="NodeHasNoDiskPressure" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.938392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.938417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.938440 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.952188 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:34Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.961231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.961280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
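Independently of the webhook failure, every Ready=False condition in this window carries the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick way to see what, if anything, is in that directory, as a small stdlib-only Python 3 sketch (the path is taken verbatim from the NotReady message; nothing else is assumed):

    # Stdlib-only sketch; the directory is the one named in the NotReady message.
    from pathlib import Path

    net_d = Path("/etc/kubernetes/cni/net.d")
    configs = sorted(net_d.iterdir()) if net_d.is_dir() else []

    if not configs:
        print(f"nothing in {net_d} - consistent with NetworkReady=false above")
    for p in configs:
        print(p.name, p.stat().st_size, "bytes")

An empty (or missing) directory here simply means the network provider has not written its config yet, which is exactly what the kubelet message suggests asking about.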
event="NodeHasNoDiskPressure" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.961291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.961308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.961320 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.976502 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:49:15.5903121 +0000 UTC Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.978345 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:34Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.981752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.981796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
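The certificate_manager entry a few lines above is also telling: the kubelet-serving certificate itself is still valid until 2026-02-24, but its rotation deadline of 2025-12-27 already lies weeks in the past against the log clock of 2026-02-16, so rotation is overdue the moment the kubelet starts. The comparison is plain date arithmetic, using only the three timestamps printed in the log:

    # Worked check of the certificate_manager entry (timestamps from the log).
    from datetime import datetime, timezone

    now      = datetime(2026, 2, 16, 21, 38, 34, tzinfo=timezone.utc)  # log clock
    deadline = datetime(2025, 12, 27, 3, 49, 15, tzinfo=timezone.utc)  # rotation deadline
    expiry   = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)    # cert expiration

    print("rotation overdue by:", now - deadline)  # 51 days, 17:49:19
    print("left before expiry: ", expiry - now)    # 7 days, 8:14:29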
event="NodeHasNoDiskPressure" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.981810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.981828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.981843 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.996286 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:34Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:34 crc kubenswrapper[4792]: E0216 21:38:34.996431 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.998083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.998118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.998131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.998146 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:34 crc kubenswrapper[4792]: I0216 21:38:34.998157 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:34Z","lastTransitionTime":"2026-02-16T21:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.025960 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:35 crc kubenswrapper[4792]: E0216 21:38:35.026166 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.100794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.100863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.100888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.100918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.100942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.204003 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.204061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.204077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.204099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.204118 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.306528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.306572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.306618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.306634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.306646 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.409462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.409526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.409542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.409563 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.409582 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.511661 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.511713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.511725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.511742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.511756 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.615441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.615511 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.615524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.615543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.615555 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.718495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.718559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.718582 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.718626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.718651 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.821423 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.821462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.821471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.821488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.821497 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.925009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.925091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.925115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.925143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.925165 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:35Z","lastTransitionTime":"2026-02-16T21:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:35 crc kubenswrapper[4792]: I0216 21:38:35.977343 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:30:23.746340419 +0000 UTC Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.026326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.026398 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:36 crc kubenswrapper[4792]: E0216 21:38:36.026476 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:36 crc kubenswrapper[4792]: E0216 21:38:36.026700 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.026756 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:36 crc kubenswrapper[4792]: E0216 21:38:36.026865 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.028610 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.028638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.028646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.028681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.028691 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.131504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.131634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.131663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.131691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.131712 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.234893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.234942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.234952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.234981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.234989 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.337699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.337752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.337774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.337793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.337811 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.440568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.440641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.440652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.440671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.440684 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.543514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.543587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.543615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.543634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.543646 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.646291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.646366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.646433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.646468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.646492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.749745 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.749818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.749851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.749881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.749901 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.852807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.852872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.852889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.852911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.852928 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.955492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.955550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.955571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.955590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.955631 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:36Z","lastTransitionTime":"2026-02-16T21:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:36 crc kubenswrapper[4792]: I0216 21:38:36.978330 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:10:25.119541184 +0000 UTC Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.025703 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:37 crc kubenswrapper[4792]: E0216 21:38:37.025880 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.058742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.058825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.058850 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.058888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.058912 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.161877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.161938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.161946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.161961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.161970 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.265059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.265223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.265248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.265298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.265319 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.368526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.368577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.368631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.368653 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.368668 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.471185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.471248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.471257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.471271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.471280 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.574686 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.574761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.574786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.574814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.574840 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.678284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.678362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.678397 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.678425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.678445 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.781942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.782027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.782055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.782086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.782108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.885638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.885706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.885731 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.885763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.885789 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.979506 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:09:12.825647372 +0000 UTC Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.988470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.988540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.988559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.988585 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.988632 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:37Z","lastTransitionTime":"2026-02-16T21:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.991871 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:37 crc kubenswrapper[4792]: I0216 21:38:37.993502 4792 scope.go:117] "RemoveContainer" containerID="3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.025332 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.025372 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.025492 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:38 crc kubenswrapper[4792]: E0216 21:38:38.025640 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:38 crc kubenswrapper[4792]: E0216 21:38:38.025841 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:38 crc kubenswrapper[4792]: E0216 21:38:38.026020 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.045644 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.061364 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.080634 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.091749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.091784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.091796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.091812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.091823 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.093617 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc 
kubenswrapper[4792]: I0216 21:38:38.107713 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.123363 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.137360 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.159565 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.176877 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193583 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.193970 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.204807 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.214835 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.229862 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.248731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff
096c261699dbd68e87fc89f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.262803 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.274403 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.296665 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.296721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.296733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.296751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.296764 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.323066 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/1.log" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.325748 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.326177 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.336220 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.347949 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.371910 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.392690 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.398482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.398517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.398527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.398542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.398551 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.406018 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.422421 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ini
tContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.431630 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.440058 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.452389 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.463210 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.474732 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.487023 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.498586 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.500681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.500705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.500714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.500733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.500742 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.513617 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.526687 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.542218 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:38Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.602484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.602525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.602537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.602553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.602566 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.704966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.705025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.705034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.705049 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.705057 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.807315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.807364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.807380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.807404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.807421 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.911016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.911062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.911072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.911091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.911103 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:38Z","lastTransitionTime":"2026-02-16T21:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:38 crc kubenswrapper[4792]: I0216 21:38:38.980662 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:50:46.927714528 +0000 UTC Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.014303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.014343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.014353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.014365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.014373 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.025815 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.026041 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.116324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.116376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.116386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.116401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.116414 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.219056 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.219142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.219160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.219183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.219201 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.321767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.321837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.321850 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.321867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.321878 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.331814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/2.log" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.332989 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/1.log" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.336284 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" exitCode=1 Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.336326 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.336361 4792 scope.go:117] "RemoveContainer" containerID="3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.337282 4792 scope.go:117] "RemoveContainer" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.337500 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.350690 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.369144 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.384731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.408497 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.425239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.425312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.425326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.425353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.425371 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.434799 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.450182 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.463270 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.476958 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.488899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.499639 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.510525 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.521966 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.527468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.527509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.527523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.527540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.527551 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.536103 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.547098 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.561127 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.572979 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.581785 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.594334 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a
578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.605573 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.617933 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.628507 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.629556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.629636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.629650 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.629666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.629677 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.639356 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.651911 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.661930 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.677026 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.695090 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e
52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dc59018ecfb30676b60bac204c3b394f71361ff096c261699dbd68e87fc89f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:24Z\\\",\\\"message\\\":\\\"etwork policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:24Z is after 2025-08-24T17:21:41Z]\\\\nI0216 21:38:24.044915 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0216 21:38:24.044922 6257 services_controller.go:356] Processing sync for service openshift-authentication-operator/metrics for network=default\\\\nI0216 21:38:24.044925 6257 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 21:38:24.044930 6257 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 21:38:24.044922 6257 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.706418 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 
21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.717102 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.730254 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.732868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.732905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.732915 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.732930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.732942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.741473 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.752828 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.762623 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.771791 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:39Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.788260 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.788378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.788412 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:39:11.78839023 +0000 UTC m=+84.441669161 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.788494 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.788572 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:39:11.788553844 +0000 UTC m=+84.441832795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.835245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.835293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.835302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.835316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.835325 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.889535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.889848 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.890200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890237 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890262 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.890268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890334 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:39:11.890311344 +0000 UTC m=+84.543590275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890468 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890503 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890526 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890535 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890647 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:39:11.890586952 +0000 UTC m=+84.543865883 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 21:38:39 crc kubenswrapper[4792]: E0216 21:38:39.890702 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:39:11.890668124 +0000 UTC m=+84.543947055 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.937985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.938038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.938050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.938066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.938079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:39Z","lastTransitionTime":"2026-02-16T21:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:39 crc kubenswrapper[4792]: I0216 21:38:39.980786 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:15:11.474055547 +0000 UTC Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.025677 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.025754 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.025686 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:40 crc kubenswrapper[4792]: E0216 21:38:40.025906 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:40 crc kubenswrapper[4792]: E0216 21:38:40.026040 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:40 crc kubenswrapper[4792]: E0216 21:38:40.026189 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.040423 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.040453 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.040461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.040474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.040483 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.143016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.143053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.143061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.143075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.143084 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.245995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.246064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.246087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.246114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.246137 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.342541 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/2.log" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.347962 4792 scope.go:117] "RemoveContainer" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.348572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.349059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.349080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.349096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.349108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: E0216 21:38:40.349380 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.383517 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.416645 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.429302 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.440905 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.451083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.451117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.451129 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.451145 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.451157 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.452443 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.465678 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.479714 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.492771 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf56
17b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.507767 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.523000 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.546074 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.553699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.553752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.553764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.553782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.553796 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.562873 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.574950 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc 
kubenswrapper[4792]: I0216 21:38:40.590023 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.605226 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.621093 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:40Z is after 2025-08-24T17:21:41Z"
Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.656280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.656323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.656333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.656348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.656361 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.760323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.760370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.760383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.760399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.760411 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.862527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.862569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.862629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.862667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.862681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.965156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.965228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.965247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.965265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.965274 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:40Z","lastTransitionTime":"2026-02-16T21:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:40 crc kubenswrapper[4792]: I0216 21:38:40.981893 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:14:04.805962571 +0000 UTC Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.025791 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:41 crc kubenswrapper[4792]: E0216 21:38:41.025920 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.067547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.067621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.067631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.067648 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.067657 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
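The NodeNotReady churn repeating above has a single root cause: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness gate, assuming the usual *.conf/*.conflist/*.json naming for CNI configs (the directory path is quoted from the log; the check itself is an illustration, not the runtime's actual code):

```python
# Illustrative version of the gate behind "NetworkReady=false ...
# no CNI configuration file in /etc/kubernetes/cni/net.d/".
import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d/")  # path from the log

def network_ready() -> bool:
    """True once at least one CNI config file has been written."""
    if not CNI_CONF_DIR.is_dir():
        return False
    return any(p.suffix in {".conf", ".conflist", ".json"}
               for p in CNI_CONF_DIR.iterdir())

if not network_ready():
    print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)
```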
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.170343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.170426 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.170449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.170478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.170501 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.272861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.272900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.272908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.272922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.272931 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.375625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.375691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.375703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.375722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.375735 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.478690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.478784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.478815 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.478836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.478890 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.581775 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.581912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.581939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.581967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.582027 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.684467 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.684559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.684643 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.684693 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.684717 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.787268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.787303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.787311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.787324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.787333 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.889780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.889829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.889843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.889862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.889875 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.982163 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:13:56.241900317 +0000 UTC
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.992569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.992903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.992921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.992947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:41 crc kubenswrapper[4792]: I0216 21:38:41.992960 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:41Z","lastTransitionTime":"2026-02-16T21:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.026084 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.026154 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.026228 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:42 crc kubenswrapper[4792]: E0216 21:38:42.026294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:38:42 crc kubenswrapper[4792]: E0216 21:38:42.026396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:42 crc kubenswrapper[4792]: E0216 21:38:42.026551 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
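The certificate_manager.go lines interleaved above print a different rotation deadline on every pass (2025-11-18, then 2025-11-19, later 2025-12-14 and 2026-01-05), all already in the past relative to the node clock. That pattern is consistent with client-go's certificate manager re-rolling a jittered deadline at roughly 70-90% of the certificate's lifetime each time it syncs. A minimal sketch of that jitter rule, where the 0.7/0.2 constants and the assumed issue date are this sketch's assumptions, not values read from the log:

```python
# Jittered rotation deadline: a random point in approximately the
# 70-90% band of the certificate's validity period.
import datetime
import random

not_before = datetime.datetime(2025, 2, 24, 5, 53, 3)  # assumed issue time (not in the log)
not_after = datetime.datetime(2026, 2, 24, 5, 53, 3)   # expiration quoted in the log

lifetime = not_after - not_before
deadline = not_before + lifetime * (0.7 + 0.2 * random.random())
print("rotation deadline is", deadline)
# A kubelet already past its deadline recomputes on each sync, which is
# why the logged deadline jumps around (and stays in the past) above.
```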
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.095826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.095905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.095932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.095960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.095979 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.197646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.197682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.197692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.197707 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.197719 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.300492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.300537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.300545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.300558 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.300568 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.403439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.403503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.403539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.403578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.403636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.506691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.506771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.506796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.506829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.506851 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.515977 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:42 crc kubenswrapper[4792]: E0216 21:38:42.516171 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:42 crc kubenswrapper[4792]: E0216 21:38:42.516284 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:38:58.51626109 +0000 UTC m=+71.169539991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.609746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.609835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.609868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.609897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.609915 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
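The MountVolume failure above is not retried immediately: nestedpendingoperations applies exponential backoff, and "durationBeforeRetry 16s" indicates several consecutive failures have already occurred. A minimal sketch of that schedule, where the 500 ms initial delay, doubling factor, and roughly two-minute cap are assumptions about the kubelet's backoff constants rather than values read from this log:

```python
# Retry schedule implied by "durationBeforeRetry 16s" in the entry above.
INITIAL_S = 0.5   # assumed initial delay
CAP_S = 122.0     # assumed cap (~2m2s)

def duration_before_retry(failures: int) -> float:
    """Delay applied after the Nth consecutive failure (N >= 1)."""
    return min(INITIAL_S * 2 ** (failures - 1), CAP_S)

for n in range(1, 9):
    print(n, duration_before_retry(n))
# Under these assumptions, 16 s corresponds to the sixth consecutive
# failure: 0.5 * 2**5 = 16.0.
```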
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.712473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.712577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.712662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.712706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.712727 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.815888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.815933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.815945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.815960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.815971 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.919412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.919500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.919524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.919553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.919573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:42Z","lastTransitionTime":"2026-02-16T21:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:42 crc kubenswrapper[4792]: I0216 21:38:42.982993 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:46:23.027856303 +0000 UTC
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.022158 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.022222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.022242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.022265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.022284 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.025836 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:38:43 crc kubenswrapper[4792]: E0216 21:38:43.026072 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.125108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.125177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.125191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.125204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.125213 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.228407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.228473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.228498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.228529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.228554 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.331400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.331472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.331495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.331518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.331536 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.434429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.434530 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.434549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.434576 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.434615 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.537230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.537298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.537317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.537344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.537363 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.640624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.641027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.641094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.641229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.641300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.744843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.744889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.744904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.744920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.744932 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.851932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.852025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.852045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.852070 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.852089 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.955315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.955398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.955421 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.955455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.955475 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:43Z","lastTransitionTime":"2026-02-16T21:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:43 crc kubenswrapper[4792]: I0216 21:38:43.983682 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:38:54.653544856 +0000 UTC
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.025208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.025306 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.025231 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:44 crc kubenswrapper[4792]: E0216 21:38:44.025456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:38:44 crc kubenswrapper[4792]: E0216 21:38:44.025554 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:38:44 crc kubenswrapper[4792]: E0216 21:38:44.025677 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.058175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.058229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.058247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.058269 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.058287 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.161156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.161210 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.161227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.161246 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.161259 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.263607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.263656 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.263666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.263681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.263692 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.365378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.365430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.365447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.365468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.365483 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.468546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.468666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.468692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.468743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.468766 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.571271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.571338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.571355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.571380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.571397 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.674265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.674328 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.674344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.674378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.674401 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.777045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.777140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.777157 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.777221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.777239 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.880747 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.880805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.880823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.880846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.880863 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.885054 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.898848 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.907388 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.927656 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.951961 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.969991 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e
52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.982359 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983781 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983804 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:44Z","lastTransitionTime":"2026-02-16T21:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.983855 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:44:00.18791188 +0000 UTC Feb 16 21:38:44 crc kubenswrapper[4792]: I0216 21:38:44.995436 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:44Z is after 2025-08-24T17:21:41Z" Feb 16 
21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.011694 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.024160 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.025395 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.025552 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.038725 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.051762 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.061743 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.073683 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.085642 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.085675 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.085684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.085697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.085707 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.086439 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.097731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.108875 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.120946 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.188321 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.188359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.188369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.188384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.188394 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.290858 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.290963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.291045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.291085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.291112 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.340226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.340277 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.340285 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.340299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.340310 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.359175 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.365111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.365172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.365185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.365202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.365214 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.379318 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.383058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.383222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.383237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.383253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.383264 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.395778 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.399571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.399616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.399625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.399639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.399651 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.410212 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.414006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.414055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.414067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.414084 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.414096 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.425963 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:45Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:45 crc kubenswrapper[4792]: E0216 21:38:45.426099 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.427618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
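
Every status-patch retry above fails the same way: the apiserver cannot call the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-16. A minimal on-node check to confirm the expiry, assuming openssl is available and the webhook is still listening on the loopback port shown in the error:

  $ openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -dates
  # expect notAfter=Aug 24 17:21:41 2025 GMT, matching the x509 error above

If notAfter is in the past, the webhook certificate has to be rotated; on a CRC cluster a stop/start cycle usually re-triggers the built-in certificate recovery, whereas adjusting the kubelet or the node clock does not address the root cause.
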
event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.427655 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.427667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.427685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.427697 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.530264 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.530289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.530297 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.530309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.530318 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.633056 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.633131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.633155 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.633183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.633219 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.736029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.736093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.736111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.736136 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.736153 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.838719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.838780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.838802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.838830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.838852 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.942947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.943027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.943043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.943067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.943083 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:45Z","lastTransitionTime":"2026-02-16T21:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:45 crc kubenswrapper[4792]: I0216 21:38:45.984627 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:19:23.727611353 +0000 UTC Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.026070 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.026147 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:46 crc kubenswrapper[4792]: E0216 21:38:46.026211 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:46 crc kubenswrapper[4792]: E0216 21:38:46.026303 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.026478 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:46 crc kubenswrapper[4792]: E0216 21:38:46.026547 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.045340 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.045381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.045392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.045406 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.045417 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.149205 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.149281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.149299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.149327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.149346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.252564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.252653 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.252664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.252677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.252685 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.356890 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.356956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.356975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.357004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.357025 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.459626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.459658 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.459666 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.459678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.459687 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.563150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.563214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.563231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.563256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.563273 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.666008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.666099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.666130 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.666160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.666182 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.769144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.769209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.769227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.769255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.769273 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.872338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.872403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.872424 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.872452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.872475 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.975379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.975461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.975481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.975503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.975520 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:46Z","lastTransitionTime":"2026-02-16T21:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:46 crc kubenswrapper[4792]: I0216 21:38:46.984835 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:11:15.937623273 +0000 UTC Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.025185 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:47 crc kubenswrapper[4792]: E0216 21:38:47.025321 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.077806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.077839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.077848 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.077861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.077870 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.181751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.181851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.181869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.181893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.181911 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.285103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.285147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.285162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.285190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.285212 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.388296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.388360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.388378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.388401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.388419 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.491525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.491568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.491576 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.491590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.491617 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.594020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.594064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.594073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.594089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.594098 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.696487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.696551 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.696569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.696640 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.696660 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.800037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.800092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.800147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.800172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.800190 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.903657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.903732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.903755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.903783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.903805 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:47Z","lastTransitionTime":"2026-02-16T21:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:47 crc kubenswrapper[4792]: I0216 21:38:47.984995 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:55:46.956477779 +0000 UTC Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.005712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.005776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.005805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.005837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.005859 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.025134 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.025180 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.025257 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:48 crc kubenswrapper[4792]: E0216 21:38:48.025370 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:48 crc kubenswrapper[4792]: E0216 21:38:48.025534 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:48 crc kubenswrapper[4792]: E0216 21:38:48.025679 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.045943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/c
ni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.058912 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.075736 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.095298 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.107923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.107963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.107977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.107998 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.108013 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.116196 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.131839 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.143985 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.159714 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.170966 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.181664 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.195242 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.206640 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.210331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.210367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.210375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.210387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.210397 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.219222 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.230118 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.247781 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.261796 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.273944 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:48Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.312247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.312293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.312303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.312318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.312326 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.415368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.415407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.415418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.415431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.415440 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.517979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.518018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.518026 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.518039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.518049 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.620922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.620961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.620997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.621013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.621025 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.723861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.723912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.723929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.723952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.723970 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.826067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.826121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.826135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.826157 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.826172 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.929072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.929129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.929145 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.929165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.929181 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:48Z","lastTransitionTime":"2026-02-16T21:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:48 crc kubenswrapper[4792]: I0216 21:38:48.985631 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:03:38.94378556 +0000 UTC Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.025985 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:49 crc kubenswrapper[4792]: E0216 21:38:49.026136 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.031676 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.031712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.031722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.031739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.031751 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.134868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.135244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.135404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.135554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.135736 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.238877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.238916 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.238943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.238966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.238978 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.344089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.344199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.344222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.344261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.344281 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.447684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.448041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.448186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.448321 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.448450 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.551455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.551541 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.551560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.551586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.551636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.654852 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.654910 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.654945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.654977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.654999 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.757860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.757911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.757923 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.757939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.757953 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.859971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.860032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.860048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.860068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.860083 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.962975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.963206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.963284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.963370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.963432 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:49Z","lastTransitionTime":"2026-02-16T21:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:49 crc kubenswrapper[4792]: I0216 21:38:49.986826 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:58:19.290690017 +0000 UTC Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.025517 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.025925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.026062 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:50 crc kubenswrapper[4792]: E0216 21:38:50.025954 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:50 crc kubenswrapper[4792]: E0216 21:38:50.026342 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:50 crc kubenswrapper[4792]: E0216 21:38:50.026461 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.066058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.066105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.066118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.066135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.066147 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.168812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.168881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.168900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.168921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.168938 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.271987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.272068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.272092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.272120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.272142 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.376442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.376659 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.376678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.377115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.377146 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.480331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.480381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.480393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.480427 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.480439 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.582933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.582974 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.582985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.583001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.583013 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.687033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.687092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.687109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.687131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.687148 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.790208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.790258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.790266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.790280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.790290 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.892173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.892210 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.892218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.892232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.892241 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.987630 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:12:38.157074306 +0000 UTC Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.994390 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.994423 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.994431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.994460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:50 crc kubenswrapper[4792]: I0216 21:38:50.994471 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:50Z","lastTransitionTime":"2026-02-16T21:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.026020 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:51 crc kubenswrapper[4792]: E0216 21:38:51.026379 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.026588 4792 scope.go:117] "RemoveContainer" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" Feb 16 21:38:51 crc kubenswrapper[4792]: E0216 21:38:51.026836 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.096295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.096344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.096352 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.096367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.096376 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.199300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.199351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.199367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.199394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.199411 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.302761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.302846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.302873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.302902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.302923 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.406403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.406528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.406550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.406577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.406636 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.509107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.509190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.509212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.509236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.509253 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.612556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.612647 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.612662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.612685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.612728 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.719296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.719425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.719458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.719499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.719541 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.824655 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.824688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.824697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.824714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.824725 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.928679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.928746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.928768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.928796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.928822 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:51Z","lastTransitionTime":"2026-02-16T21:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:51 crc kubenswrapper[4792]: I0216 21:38:51.988719 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:03:49.911313934 +0000 UTC Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.029926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:52 crc kubenswrapper[4792]: E0216 21:38:52.030100 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.030748 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:52 crc kubenswrapper[4792]: E0216 21:38:52.030868 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.030944 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:52 crc kubenswrapper[4792]: E0216 21:38:52.031023 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.035343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.035434 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.035454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.035528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.035548 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.139874 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.139986 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.140011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.140042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.140067 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.242996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.243120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.243165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.243208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.243236 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.345829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.345870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.345880 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.345894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.345904 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.448373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.448409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.448417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.448428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.448437 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.551019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.551085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.551102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.551125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.551143 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.654141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.654193 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.654206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.654227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.654240 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.756771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.756861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.756889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.756919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.756942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.859638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.859691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.859702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.859720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.859730 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.962253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.962305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.962318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.962338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.962350 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:52Z","lastTransitionTime":"2026-02-16T21:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:52 crc kubenswrapper[4792]: I0216 21:38:52.989783 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:11:42.917650648 +0000 UTC Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.025491 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:53 crc kubenswrapper[4792]: E0216 21:38:53.025706 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.064304 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.064362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.064370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.064385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.064395 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.166768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.166812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.166822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.166836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.166847 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.269671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.269728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.269743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.269765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.269781 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.372085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.372128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.372137 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.372151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.372160 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.474540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.474574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.474583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.474617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.474627 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.577097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.577134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.577146 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.577162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.577174 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.679194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.679258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.679270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.679289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.679349 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.781488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.781542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.781567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.781587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.781656 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.885339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.885376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.885383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.885398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.885407 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.988489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.988557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.988574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.988625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.988647 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:53Z","lastTransitionTime":"2026-02-16T21:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:53 crc kubenswrapper[4792]: I0216 21:38:53.990664 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:33:22.802611625 +0000 UTC Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.026047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:54 crc kubenswrapper[4792]: E0216 21:38:54.026237 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.026450 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:54 crc kubenswrapper[4792]: E0216 21:38:54.026648 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.026463 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:54 crc kubenswrapper[4792]: E0216 21:38:54.027185 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.091042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.091078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.091092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.091109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.091120 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.194489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.194557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.194579 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.194641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.194660 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.297176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.297237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.297249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.297266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.297276 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.399211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.399243 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.399251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.399262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.399271 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.502637 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.502683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.502717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.502735 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.502751 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.604739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.604784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.604794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.604811 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.604821 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.707641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.707713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.707730 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.707749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.707765 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.810035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.810079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.810092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.810106 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.810116 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.912311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.912362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.912373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.912391 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.912406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:54Z","lastTransitionTime":"2026-02-16T21:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:54 crc kubenswrapper[4792]: I0216 21:38:54.991684 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:48:24.643604519 +0000 UTC Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.014534 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.014586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.014619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.014638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.014653 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.026152 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.026339 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.116807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.116849 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.116861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.116877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.116887 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.219051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.219090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.219099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.219114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.219124 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.322032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.322105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.322131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.322162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.322185 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.424461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.424498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.424506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.424519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.424528 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.472128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.472175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.472193 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.472215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.472235 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.485637 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:55Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.489331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.489436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.489468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.489499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.489522 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.505338 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:55Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.511545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.511623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.511639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.511672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.511684 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.522815 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:55Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.525988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.526028 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.526044 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.526064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.526079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.537463 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:55Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.540116 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.540145 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.540154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.540189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.540215 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.550121 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:55Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:55 crc kubenswrapper[4792]: E0216 21:38:55.550432 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.551859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
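Every patch attempt above fails the same way: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24, so the API server rejects the node-status patch until the retry budget is exhausted. A minimal Go sketch (illustrative only, not part of this log; the endpoint is taken from the webhook URL above) for reading that certificate's validity window directly from the node:

// checkcert.go -- sketch: dial the webhook endpoint named in the kubelet
// error and print the serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify is deliberate: verification is exactly what fails
	// in the log, and we want to inspect the certificate anyway.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s\nnotBefore=%s\nnotAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
}

Run against this node, the sketch would be expected to report notAfter=2025-08-24 17:21:41 +0000 UTC, matching the x509 error text in the entries above.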
event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.551930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.552031 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.552092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.552114 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.653843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.653882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.653892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.653907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.653918 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756358 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.756358 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.858842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.858901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.858912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.858929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.858961 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.960907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.960956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.960969 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.960993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.961004 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:55Z","lastTransitionTime":"2026-02-16T21:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:55 crc kubenswrapper[4792]: I0216 21:38:55.992282 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:56:17.425974899 +0000 UTC Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.026047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.026096 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:56 crc kubenswrapper[4792]: E0216 21:38:56.026184 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.026203 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:56 crc kubenswrapper[4792]: E0216 21:38:56.026257 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:56 crc kubenswrapper[4792]: E0216 21:38:56.026286 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.063649 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.063686 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.063698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.063714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.063729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.166112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.166173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.166184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.166197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.166206 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.268327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.268369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.268380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.268396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.268409 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.370957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.371004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.371016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.371034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.371045 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.473460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.473504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.473514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.473529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.473539 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.576452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.576486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.576496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.576510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.576518 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.679208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.679244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.679253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.679266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.679276 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.781355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.781401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.781410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.781422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.781431 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.883548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.883589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.883633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.883651 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.883666 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.987023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.987071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.987083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.987101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.987113 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:56Z","lastTransitionTime":"2026-02-16T21:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:56 crc kubenswrapper[4792]: I0216 21:38:56.993191 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:20:14.620829419 +0000 UTC Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.025496 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:38:57 crc kubenswrapper[4792]: E0216 21:38:57.025724 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.089566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.089646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.089669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.089701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.089722 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.192071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.192105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.192113 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.192126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.192134 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.294318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.294350 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.294385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.294398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.294408 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.397214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.397262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.397320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.397363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.397376 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.499674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.499701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.499711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.499722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.499732 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.602217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.602265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.602276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.602294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.602306 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.704498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.704545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.704556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.704573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.704585 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.806617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.806650 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.806660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.806673 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.806682 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.908836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.908883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.908895 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.908916 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.908930 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:57Z","lastTransitionTime":"2026-02-16T21:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:57 crc kubenswrapper[4792]: I0216 21:38:57.993959 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:53:29.549001221 +0000 UTC Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.011442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.011472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.011481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.011493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.011502 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.025922 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:38:58 crc kubenswrapper[4792]: E0216 21:38:58.026073 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.026155 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.026182 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:38:58 crc kubenswrapper[4792]: E0216 21:38:58.026287 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:38:58 crc kubenswrapper[4792]: E0216 21:38:58.026361 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.041095 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.054547 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.068125 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.082320 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.090365 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.099635 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.115739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.115773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.115783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.115796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.115806 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.119030 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.130683 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.146120 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.161194 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.177405 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.188769 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.203755 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.218438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.218514 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.218532 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.218555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.218573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.229247 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e
52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.243391 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.254409 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.267440 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:38:58Z is after 2025-08-24T17:21:41Z" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.321448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.321482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.321493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.321508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.321521 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.424560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.424616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.424626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.424639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.424648 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.528468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.528506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.528517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.528535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.528547 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.577285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:38:58 crc kubenswrapper[4792]: E0216 21:38:58.577537 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:58 crc kubenswrapper[4792]: E0216 21:38:58.577938 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:39:30.577912035 +0000 UTC m=+103.231190956 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.631680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.632056 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.632271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.632485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.632710 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.736946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.737009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.737025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.737049 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.737066 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.839911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.839975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.839994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.840018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.840036 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.942674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.942734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.942748 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.942763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.942774 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:58Z","lastTransitionTime":"2026-02-16T21:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:58 crc kubenswrapper[4792]: I0216 21:38:58.994625 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:15:39.203095947 +0000 UTC
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.026114 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:38:59 crc kubenswrapper[4792]: E0216 21:38:59.026243 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.045058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.045099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.045108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.045121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.045133 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.147894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.147963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.147983 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.148007 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.148027 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.250070 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.250187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.250207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.250232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.250249 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.352341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.352385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.352404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.352428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.352444 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.455346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.455381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.455392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.455407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.455420 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.558187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.558276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.558294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.558316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.558332 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.661098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.661149 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.661159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.661179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.661193 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.764471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.764536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.764546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.764560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.764569 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.866971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.867045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.867057 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.867072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.867083 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.969051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.969083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.969094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.969109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.969120 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:38:59Z","lastTransitionTime":"2026-02-16T21:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:38:59 crc kubenswrapper[4792]: I0216 21:38:59.995790 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:40:19.382774045 +0000 UTC
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.026045 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.026460 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:00 crc kubenswrapper[4792]: E0216 21:39:00.026743 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.026804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:00 crc kubenswrapper[4792]: E0216 21:39:00.027185 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:00 crc kubenswrapper[4792]: E0216 21:39:00.027251 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.071209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.071251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.071260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.071274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.071283 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.173338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.173378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.173389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.173404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.173415 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.275995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.276055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.276072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.276095 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.276113 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.378444 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.378497 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.378531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.378555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.378573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.407812 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/0.log" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.407879 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" containerID="14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2" exitCode=1 Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.407916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerDied","Data":"14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.408390 4792 scope.go:117] "RemoveContainer" containerID="14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.429634 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file 
check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.449530 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.462553 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.478957 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0
c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.480265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.480310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.480320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.480336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.480346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.497447 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.510081 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.523586 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.534931 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.544219 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.553766 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.563013 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.572562 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.582893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.582938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.582950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.582967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.582981 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.587771 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.602009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.615968 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.627781 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.640962 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:00Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.686143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.686178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.686186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.686201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.686209 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.788268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.788311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.788321 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.788335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.788346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.890796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.890877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.890888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.890902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.890911 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.993332 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.993371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.993379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.993395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.993406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:00Z","lastTransitionTime":"2026-02-16T21:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:00 crc kubenswrapper[4792]: I0216 21:39:00.996402 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:07:22.110138174 +0000 UTC Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.025936 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:01 crc kubenswrapper[4792]: E0216 21:39:01.026067 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.096195 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.096237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.096276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.096292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.096324 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.198644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.198697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.198714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.200867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.200919 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.303536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.303591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.303639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.303660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.303675 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.406027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.406055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.406065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.406079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.406090 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.413579 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/0.log" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.413717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerStarted","Data":"363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.426046 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.439435 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.456887 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-o
vn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.468010 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 
21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.476996 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.488904 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.499466 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.511729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.511783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.511798 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.511816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.511826 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.514720 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.525524 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.535140 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.543920 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.558734 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.576060 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.588447 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.600454 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.613944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.614148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.614238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.614326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.614406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.617387 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.631333 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:01Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.716483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.716524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.716535 4792 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.716550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.716559 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.818714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.818746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.818754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.818767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.818776 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.921404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.921476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.921487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.921503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.921515 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:01Z","lastTransitionTime":"2026-02-16T21:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:01 crc kubenswrapper[4792]: I0216 21:39:01.997417 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:32:57.426499311 +0000 UTC Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.023794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.023879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.023902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.023937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.023959 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.026761 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.026815 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.026761 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:02 crc kubenswrapper[4792]: E0216 21:39:02.026874 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:02 crc kubenswrapper[4792]: E0216 21:39:02.026974 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:02 crc kubenswrapper[4792]: E0216 21:39:02.027037 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.127045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.127102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.127118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.127142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.127204 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.229763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.229803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.229815 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.229830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.229843 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.331982 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.332033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.332046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.332065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.332075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.434357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.434422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.434436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.434451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.434460 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.536053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.536096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.536108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.536126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.536138 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.638261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.638293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.638301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.638314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.638323 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.740710 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.740769 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.740777 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.740791 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.740800 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.843064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.843101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.843112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.843128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.843139 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.945535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.945581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.945627 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.945652 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.945661 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:02Z","lastTransitionTime":"2026-02-16T21:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:02 crc kubenswrapper[4792]: I0216 21:39:02.998333 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:32:36.824806131 +0000 UTC Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.025761 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:03 crc kubenswrapper[4792]: E0216 21:39:03.025953 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.048055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.048105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.048118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.048136 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.048155 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.150891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.150949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.150965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.150987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.151003 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.253332 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.253378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.253386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.253400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.253408 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.356518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.356568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.356579 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.356612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.356625 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.458892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.458959 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.458975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.458997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.459011 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.561591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.561653 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.561664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.561680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.561691 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.664342 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.664395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.664409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.664428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.664449 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.767538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.767625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.767642 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.767661 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.767675 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.870549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.870640 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.870657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.870680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.870697 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.974122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.974214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.974226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.974238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.974246 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:03Z","lastTransitionTime":"2026-02-16T21:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:03 crc kubenswrapper[4792]: I0216 21:39:03.999064 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:44:11.614479672 +0000 UTC Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.027320 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:04 crc kubenswrapper[4792]: E0216 21:39:04.027436 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.027633 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:04 crc kubenswrapper[4792]: E0216 21:39:04.027701 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.027831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:04 crc kubenswrapper[4792]: E0216 21:39:04.027901 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.076946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.077035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.077058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.077090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.077113 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.180917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.180982 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.180994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.181012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.181026 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.283914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.283964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.283978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.283996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.284008 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.386992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.387065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.387083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.387107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.387125 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.489971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.490013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.490056 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.490073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.490085 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.593161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.593242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.593271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.593303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.593328 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.695684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.695735 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.695747 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.695767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.695780 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.798033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.798080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.798091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.798109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.798123 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.900547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.900635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.900654 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.900677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.900697 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:04Z","lastTransitionTime":"2026-02-16T21:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:04 crc kubenswrapper[4792]: I0216 21:39:04.999384 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:43:45.342516742 +0000 UTC Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.003237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.003287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.003307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.003334 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.003351 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.025673 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.025833 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.027022 4792 scope.go:117] "RemoveContainer" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.105364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.105400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.105411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.105429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.105452 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.209197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.209253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.209267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.209287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.209300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.311719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.311772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.311785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.311812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.311827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.414722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.414761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.414772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.414787 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.414798 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.427184 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/2.log" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.430547 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.431077 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.454076 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.474068 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.495712 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.511277 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.516814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.516856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.516868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.516885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.516897 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.523434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.543675 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.572498 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.583192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.583239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.583254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.583279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 
21:39:05.583296 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.593177 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.599851 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.604869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.604903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.604912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.604925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.604934 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.605871 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.616418 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.620894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.621025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.621107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.621190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.621287 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.624405 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.634537 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.637837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.637875 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.637886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.637902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.637915 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.639444 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.652582 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.654402 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.656200 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.656245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.656261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.656282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.656298 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.669696 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.671761 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: E0216 21:39:05.671865 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.673431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.673483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.673493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.673507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.673516 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.701591 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.723529 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf19
2b20cba2dc4da7db72500f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.738680 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 
21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.749678 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:05Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.775330 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.775370 4792 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.775379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.775392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.775402 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.877528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.877581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.877616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.877633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.877645 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.980002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.980057 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.980074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.980098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:05 crc kubenswrapper[4792]: I0216 21:39:05.980116 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:05Z","lastTransitionTime":"2026-02-16T21:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.000646 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:35:34.499944839 +0000 UTC Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.025866 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.025913 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:06 crc kubenswrapper[4792]: E0216 21:39:06.026000 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.025866 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:06 crc kubenswrapper[4792]: E0216 21:39:06.026133 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:06 crc kubenswrapper[4792]: E0216 21:39:06.026219 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.083656 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.083704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.083716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.083734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.083748 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.186558 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.186677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.186701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.186729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.186751 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.290431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.290516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.290538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.290565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.290584 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.394013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.394072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.394081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.394096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.394106 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.436247 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/3.log" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.437261 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/2.log" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.440439 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" exitCode=1 Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.440488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.440550 4792 scope.go:117] "RemoveContainer" containerID="1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.441756 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:39:06 crc kubenswrapper[4792]: E0216 21:39:06.442199 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.463263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.482685 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.497065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.497142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.497164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.497192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.497213 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.510076 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.537998 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a929200407e54a365f92812c1dd44294455435e52b80010b4bd3291bfd9f1a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:38Z\\\",\\\"message\\\":\\\"0920 6490 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740924 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 21:38:38.740932 6490 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740942 6490 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0216 21:38:38.740948 6490 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0216 21:38:38.740953 6490 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 21:38:38.740954 6490 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0216 21:38:38.740914 6490 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz\\\\nF0216 21:38:38.740921 6490 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:39:06Z\\\",\\\"message\\\":\\\"5.997165 6887 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 21:39:05.997200 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mp8ql\\\\nI0216 21:39:05.997207 6887 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-szmc4 in node crc\\\\nI0216 21:39:05.997208 6887 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz in node crc\\\\nI0216 21:39:05.997217 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-sxb4b\\\\nI0216 21:39:05.997224 6887 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-szmc4 after 0 failed attempt(s)\\\\nI0216 21:39:05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.555230 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 
21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.572337 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.591727 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.600029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.600102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.600115 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.600136 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.600153 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.610004 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.627223 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.642772 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.661023 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.676836 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.692137 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.702981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.703061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.703086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.703116 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 
21:39:06.703145 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.714056 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.731354 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.751855 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.772945 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:06Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.806418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.806476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.806498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.806526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.806549 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.909765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.909834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.909844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.909860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:06 crc kubenswrapper[4792]: I0216 21:39:06.909871 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:06Z","lastTransitionTime":"2026-02-16T21:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.001568 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:21:49.861238112 +0000 UTC Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.012900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.012959 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.012977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.013000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.013017 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.025560 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:07 crc kubenswrapper[4792]: E0216 21:39:07.025834 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.116522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.116587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.116634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.116660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.116677 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.219531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.219587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.219624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.219646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.219661 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.322794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.322863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.322888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.322920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.322942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.425203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.425244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.425256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.425270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.425281 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.446363 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/3.log" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.450873 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:39:07 crc kubenswrapper[4792]: E0216 21:39:07.451097 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.463642 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.481104 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.491007 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.505440 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0
c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.523998 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf19
2b20cba2dc4da7db72500f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:39:06Z\\\",\\\"message\\\":\\\"5.997165 6887 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 21:39:05.997200 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mp8ql\\\\nI0216 21:39:05.997207 6887 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-szmc4 in node crc\\\\nI0216 21:39:05.997208 6887 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz in node crc\\\\nI0216 21:39:05.997217 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-sxb4b\\\\nI0216 21:39:05.997224 6887 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-szmc4 after 0 failed attempt(s)\\\\nI0216 21:39:05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:39:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.527812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.527859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.527870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.527888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.527900 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.534315 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.545408 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc 
kubenswrapper[4792]: I0216 21:39:07.561559 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.574711 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.589436 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.600682 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.613898 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630063 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630137 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630178 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.630751 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.643460 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.659324 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.670998 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.687817 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:07Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.733345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.733393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.733402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.733422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.733434 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.836774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.836834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.836854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.836877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.836895 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.939883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.939915 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.939924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.939936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:07 crc kubenswrapper[4792]: I0216 21:39:07.939945 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:07Z","lastTransitionTime":"2026-02-16T21:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.001770 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:31:59.912446353 +0000 UTC Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.025455 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.025586 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.025755 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:08 crc kubenswrapper[4792]: E0216 21:39:08.025764 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:08 crc kubenswrapper[4792]: E0216 21:39:08.025937 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:08 crc kubenswrapper[4792]: E0216 21:39:08.026148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.041853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.041921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.041929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.041941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.041950 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.046466 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.062665 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.078000 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.098511 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:39:06Z\\\",\\\"message\\\":\\\"5.997165 6887 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 21:39:05.997200 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mp8ql\\\\nI0216 21:39:05.997207 6887 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-szmc4 in node crc\\\\nI0216 21:39:05.997208 6887 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz in node crc\\\\nI0216 21:39:05.997217 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-sxb4b\\\\nI0216 21:39:05.997224 6887 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-szmc4 after 0 failed attempt(s)\\\\nI0216 21:39:05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:39:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.113658 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.125919 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.140431 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.144167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.144205 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.144224 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.144254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.144268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.153922 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.168132 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.177589 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.196121 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.209506 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.229202 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.243544 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.247917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.247967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.247985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.248010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.248030 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.257970 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.270263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.286707 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:08Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.350266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.350308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.350322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.350341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.350396 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.453064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.453127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.453154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.453185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.453209 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.555995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.556033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.556044 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.556059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.556071 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.659377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.659425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.659441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.659462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.659477 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.762914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.762992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.763014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.763043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.763064 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.865757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.866066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.866164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.866261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.866341 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.969886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.969945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.969965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.969987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:08 crc kubenswrapper[4792]: I0216 21:39:08.970004 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:08Z","lastTransitionTime":"2026-02-16T21:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.002742 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:52:11.42875944 +0000 UTC Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.026107 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:09 crc kubenswrapper[4792]: E0216 21:39:09.026258 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.072052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.072088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.072098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.072112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.072124 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.174471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.174513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.174522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.174538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.174548 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.276641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.276699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.276715 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.276738 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.276755 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.379045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.379097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.379115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.379139 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.379156 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.481944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.481992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.482009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.482031 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.482047 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.585558 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.585618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.585627 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.585646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.585655 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.688737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.688810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.688835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.688863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.688886 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.792190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.792237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.792249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.792266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.792295 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.895302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.895379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.895402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.895431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.895452 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.998289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.998354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.998372 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.998395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:09 crc kubenswrapper[4792]: I0216 21:39:09.998413 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:09Z","lastTransitionTime":"2026-02-16T21:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.003462 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:36:07.297501413 +0000 UTC Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.026143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.026184 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:10 crc kubenswrapper[4792]: E0216 21:39:10.026303 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.026490 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:10 crc kubenswrapper[4792]: E0216 21:39:10.026542 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:10 crc kubenswrapper[4792]: E0216 21:39:10.026774 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.101831 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.101931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.101997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.102023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.102081 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.205119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.205182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.205199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.205221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.205237 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.308143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.308217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.308253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.308286 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.308309 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.411663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.411728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.411745 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.411769 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.411787 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.515156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.515224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.515242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.515264 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.515282 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.618306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.618376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.618400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.618429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.618450 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.721543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.721645 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.721684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.721715 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.721743 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.825127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.825196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.825213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.825235 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.825253 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.927754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.927822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.927836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.927857 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:10 crc kubenswrapper[4792]: I0216 21:39:10.927872 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:10Z","lastTransitionTime":"2026-02-16T21:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.004658 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:04:23.230086395 +0000 UTC Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.025158 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.025296 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.030399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.030434 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.030448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.030490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.030505 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.040096 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.132822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.132883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.132893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.132906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.132915 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.234760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.234814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.234828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.234869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.234882 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.337902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.337961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.337978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.338009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.338033 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.441554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.441994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.442012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.442036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.442053 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.545270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.545345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.545369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.545398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.545419 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.648654 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.649026 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.649161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.649284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.649439 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.753260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.753812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.754044 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.754250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.754431 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:11Z","lastTransitionTime":"2026-02-16T21:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.825130 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.825384 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.825356356 +0000 UTC m=+148.478635247 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.825485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.825681 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.825732 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.825722046 +0000 UTC m=+148.479000937 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
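The two nestedpendingoperations failures above show the kubelet's per-volume exponential backoff: the operations fail at 21:39:11 and are barred until 21:40:15, matching the logged durationBeforeRetry of 1m4s, and a doubling backoff that starts at 500 ms reaches exactly 64 s on its eighth step. A sketch of that schedule; the 500 ms initial step and the ~2-minute cap are assumed kubelet defaults, not values printed in this log:

from datetime import timedelta

# The "durationBeforeRetry 1m4s" above is consistent with a per-volume
# doubling backoff. The 500 ms initial step and the 2m2s cap are assumed
# defaults, not values printed in this log.
def backoff_schedule(step=timedelta(milliseconds=500),
                     cap=timedelta(minutes=2, seconds=2)):
    while True:
        yield step
        step = min(step * 2, cap)

sched = backoff_schedule()
for attempt in range(1, 9):
    print(attempt, next(sched))   # attempt 8 prints 0:01:04, the 1m4s seen here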
[... the node-status block repeats at 21:39:11.857 ...]
Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.926765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.926868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:11 crc kubenswrapper[4792]: I0216 21:39:11.926935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.926959 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927000 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927021 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927098 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.927075527 +0000 UTC m=+148.580354458 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927090 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927174 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927199 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.92717245 +0000 UTC m=+148.580451381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927208 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927233 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 21:39:11 crc kubenswrapper[4792]: E0216 21:39:11.927316 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.927295384 +0000 UTC m=+148.580574315 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[... the node-status block repeats at 21:39:11.959 ...]
Feb 16 21:39:12 crc kubenswrapper[4792]: I0216 21:39:12.005451 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:50:42.998729207 +0000 UTC
Feb 16 21:39:12 crc kubenswrapper[4792]: I0216 21:39:12.025867 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:12 crc kubenswrapper[4792]: I0216 21:39:12.025882 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:12 crc kubenswrapper[4792]: E0216 21:39:12.026051 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:12 crc kubenswrapper[4792]: I0216 21:39:12.025892 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:12 crc kubenswrapper[4792]: E0216 21:39:12.026236 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:12 crc kubenswrapper[4792]: E0216 21:39:12.026135 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the node-status block repeats every ~100 ms at 21:39:12.063, .167, .270, .373, .475, .614, .719 and .822 ...]
[... the node-status block repeats at 21:39:12.925 ...]
Feb 16 21:39:13 crc kubenswrapper[4792]: I0216 21:39:13.005936 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:25:14.004050815 +0000 UTC
Feb 16 21:39:13 crc kubenswrapper[4792]: I0216 21:39:13.025360 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:13 crc kubenswrapper[4792]: E0216 21:39:13.025641 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the node-status block repeats every ~100 ms from 21:39:13.027 through 21:39:13.958 ...]
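Each certificate_manager line in this stretch reports the same expiration (2026-02-24) but a different rotation deadline, and every deadline shown (2025-11-12 through 2026-01-16) is already in the past relative to the log's own clock of 2026-02-16, so serving-certificate rotation is due immediately. The changing value looks like a jittered deadline being recomputed inside the certificate's validity window on each pass; that is an assumption about client-go's behavior, not something this log states. A sketch that extracts the deadlines and flags the overdue ones:

import re, sys
from datetime import datetime, timezone

# The log's own clock, taken from the entry timestamps of this boot.
NOW = datetime(2026, 2, 16, 21, 39, tzinfo=timezone.utc)
ROT = re.compile(r'Certificate expiration is (\S+ \S+) \+0000 UTC, '
                 r'rotation deadline is (\S+ \S+)')

for line in sys.stdin:
    for exp_s, dl_s in ROT.findall(line):
        # deadlines carry fractional seconds; drop them before parsing
        dl = datetime.strptime(dl_s.split('.')[0], '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
        print(f'deadline {dl}  overdue: {dl < NOW}  (cert expires {exp_s})')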
Feb 16 21:39:14 crc kubenswrapper[4792]: I0216 21:39:14.006733 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:21:11.754920215 +0000 UTC
Feb 16 21:39:14 crc kubenswrapper[4792]: I0216 21:39:14.026209 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:14 crc kubenswrapper[4792]: I0216 21:39:14.026281 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:14 crc kubenswrapper[4792]: I0216 21:39:14.026333 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:14 crc kubenswrapper[4792]: E0216 21:39:14.026510 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:14 crc kubenswrapper[4792]: E0216 21:39:14.026686 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:14 crc kubenswrapper[4792]: E0216 21:39:14.026886 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the node-status block repeats every ~100 ms from 21:39:14.061 through 21:39:14.995 ...]
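Reading past the status spam, each stuck pod is re-synced on a steady cadence: networking-console-plugin at 21:39:11, :13 and :15, and the network-check and network-metrics pods at 21:39:12 and :14, every attempt failing at sandbox creation with the same CNI error. A sketch that measures those inter-attempt gaps from the "No sandbox for pod" lines (format as in this dump):

import re, sys
from collections import defaultdict

# Measures the gap between successive sync attempts per pod, using the
# "No sandbox for pod" lines (one attempt per failed sync).
ATT = re.compile(r'I\d{4} (\d\d):(\d\d):(\d\d\.\d+) \d+ util\.go:\d+\] "No sandbox for pod '
                 r'can be found\. Need to start a new one" pod="([^"]+)"')

attempts = defaultdict(list)
for line in sys.stdin:
    for h, m, s, pod in ATT.findall(line):
        attempts[pod].append(int(h) * 3600 + int(m) * 60 + float(s))

for pod, ts in attempts.items():
    gaps = [round(b - a, 3) for a, b in zip(ts, ts[1:])]
    print(pod, gaps)   # roughly 2 s between attempts for every pod here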
Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.007171 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:35:42.383243001 +0000 UTC Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.025516 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.025710 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.098408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.098454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.098468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.098486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.098500 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.201016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.201051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.201059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.201071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.201079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.304234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.304309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.304332 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.304359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.304383 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.407110 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.407166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.407187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.407214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.407237 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.509808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.509877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.509905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.509933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.509955 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.614048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.614232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.614301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.614331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.614350 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.718098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.718548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.718570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.718648 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.718692 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.770054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.770090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.770103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.770119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.770131 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.791524 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.796437 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.796472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.796481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.796493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.796503 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.815971 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.821451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.821522 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.821544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.821577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.821628 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.840865 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.848258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.848395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.848418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.848442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.848498 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.867975 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.873629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.873708 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.873732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.873756 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.873776 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.891550 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:15Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:15 crc kubenswrapper[4792]: E0216 21:39:15.891705 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
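The two failed patches above are successive attempts in the kubelet's bounded node-status retry: each failure logs "Error updating node status, will retry", and once the attempt limit is reached (five per sync pass in the upstream kubelet) it gives up with "update node status exceeds retry count" until the next pass. Every attempt dies in the same place, the client-side TLS handshake with the node.network-node-identity.openshift.io webhook: the serving certificate's NotAfter, 2025-08-24T17:21:41Z, is before the node's clock, 2026-02-16T21:39:15Z, so the Go TLS stack rejects the certificate before any HTTP request is sent. A standalone sketch of that same validity-window check against a PEM file on disk; the path is hypothetical, and this mirrors, rather than reproduces, the crypto/x509 verification step:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's serving certificate.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // The same window check that fails in the log: verification
        // requires NotBefore <= now <= NotAfter.
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Println("certificate valid until", cert.NotAfter.Format(time.RFC3339))
        }
    }

Because the failure is in certificate verification, no amount of kubelet retrying can succeed here; the webhook's serving certificate has to be reissued (or the node clock corrected) first.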
Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.893428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.893472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.893509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.893528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.893539 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.995987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.996024 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.996038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.996053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:15 crc kubenswrapper[4792]: I0216 21:39:15.996063 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:15Z","lastTransitionTime":"2026-02-16T21:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.008343 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:47:02.321465973 +0000 UTC Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.025837 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.025837 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.025950 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:16 crc kubenswrapper[4792]: E0216 21:39:16.026225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
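The certificate_manager.go:356 line above concerns a different certificate from the expired webhook one: the kubelet's own serving certificate (kubernetes.io/kubelet-serving), still valid until 2026-02-24. client-go's certificate manager schedules rotation at a randomized point part-way through the validity window and re-rolls the jitter on every pass, which is why the deadline logged here (2025-11-20) differs from the ones logged one and two seconds later (2026-01-15, 2025-12-05); a deadline already in the past simply means rotation is due immediately. A rough sketch of that scheduling rule, assuming a 70-90% jitter band (the exact factor is a client-go implementation detail) and an issue time the log does not show:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point roughly 70-90% of the way
    // through a certificate's validity window, imitating (not copying)
    // the jitter in client-go's certificate manager.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiry taken from the log line above; issue time is assumed.
        notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumption, not from the log

        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        fmt.Println("rotation due now:", time.Now().After(deadline))
    }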
Feb 16 21:39:16 crc kubenswrapper[4792]: E0216 21:39:16.026372 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:16 crc kubenswrapper[4792]: E0216 21:39:16.026491 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.041200 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.105298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.105361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.105384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.105415 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.105440 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
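Every NodeNotReady condition in this stretch carries the same underlying report from the container runtime: its CNI configuration directory, /etc/kubernetes/cni/net.d/, holds no network config yet, and on this cluster multus only publishes one there once the default OVN-Kubernetes network is up (see the readiness-indicator message from the multus container further down). A simplified sketch of the directory check, assuming the usual libcni-style extensions; the real logic lives in the runtime's ocicni/libcni config loading:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one CNI network
    // config file, using the extensions libcni-style loaders look for.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            fmt.Println("network plugin not ready: no CNI configuration file; err:", err)
            return
        }
        fmt.Println("CNI configuration present")
    }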
Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.207882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.208288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.208498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.208746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.208959 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.311700 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.311768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.311786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.311811 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.311827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.415319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.415374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.415396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.415424 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.415444 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.518346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.518420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.518442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.518472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.518492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.621307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.621786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.622009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.622191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.622346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.725764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.726138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.726306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.726457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.726715 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.829717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.829765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.829781 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.829805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.829821 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.932664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.932760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.932784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.932814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:16 crc kubenswrapper[4792]: I0216 21:39:16.932838 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:16Z","lastTransitionTime":"2026-02-16T21:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.009273 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:42:40.814109649 +0000 UTC Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.026232 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:17 crc kubenswrapper[4792]: E0216 21:39:17.026689 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.035526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.035635 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.035662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.035692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.035716 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.138094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.138143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.138159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.138178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.138189 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.240833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.240895 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.240909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.240927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.240939 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.344216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.344278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.344296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.344318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.344340 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.447904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.447975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.447992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.448016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.448034 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.550785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.550838 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.550873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.550892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.550904 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.653883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.653958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.653984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.654013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.654037 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.758156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.758316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.758344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.758409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.758426 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.862359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.862398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.862410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.862434 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.862446 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.964379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.964417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.964429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.964445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:17 crc kubenswrapper[4792]: I0216 21:39:17.964459 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:17Z","lastTransitionTime":"2026-02-16T21:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.010251 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:15:35.153928348 +0000 UTC Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.025676 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.025856 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.026109 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
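The "Failed to update status for pod" entries that follow carry Kubernetes strategic merge patches: JSON fragments that the apiserver merges into the stored object field by field, where lists such as conditions merge on their "type" key and a "$setElementOrder/conditions" directive (visible in the node patch earlier and in the multus and ovnkube pod patches below) pins the order of the merged list. The patches here are well formed; they are rejected only because the pod.network-node-identity.openshift.io admission webhook cannot be called. A minimal local sketch of how such a patch applies, assuming the k8s.io/api and k8s.io/apimachinery modules are available; this is illustrative, not the apiserver's code path:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        // Existing status, as the apiserver would hold it.
        original := []byte(`{"status":{"conditions":[
            {"type":"Ready","status":"False"},
            {"type":"Initialized","status":"True"}]}}`)

        // A trimmed-down version of the patches in this log:
        // $setElementOrder pins the list order; the conditions entry
        // merges into the existing element with the same "type" key.
        patch := []byte(`{"status":{
            "$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"}],
            "conditions":[{"type":"Ready","status":"True"}]}}`)

        merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(merged))
    }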
Feb 16 21:39:18 crc kubenswrapper[4792]: E0216 21:39:18.026096 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:18 crc kubenswrapper[4792]: E0216 21:39:18.026540 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:18 crc kubenswrapper[4792]: E0216 21:39:18.026724 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.042096 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28c126ad-b306-4954-b8d7-c20b31bd34c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2339925a0bd14050bedd2f7bed99705b97217e702a55d0449b0f789b44fdab31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2514fbab3e3e8134bb22
5f703f902cd69818c335bc1563ce5db1a3506b4b6765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2514fbab3e3e8134bb225f703f902cd69818c335bc1563ce5db1a3506b4b6765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.060121 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mp8ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f2095e9-5a78-45fb-a930-eacbd54ec73d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:38:59Z\\\",\\\"message\\\":\\\"2026-02-16T21:38:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6\\\\n2026-02-16T21:38:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_354d2676-4d4c-4b8c-92b2-3b035ca4c9a6 to /host/opt/cni/bin/\\\\n2026-02-16T21:38:14Z [verbose] multus-daemon 
started\\\\n2026-02-16T21:38:14Z [verbose] Readiness Indicator file check\\\\n2026-02-16T21:38:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svsrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mp8ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.066691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.066729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.066741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.066758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
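The terminated-container message embedded in the multus patch above explains that container's earlier restart: the kube-multus container's readiness check polls for the default network's config file, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and gave up when the poll timed out ("pollimmediate error: timed out waiting for the condition"). A minimal sketch of that kind of file wait, with a plain loop standing in for the PollImmediate-style helper the message's wording suggests; the path comes from the log, while the interval and timeout are assumptions:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path until it exists or timeout elapses,
    // mimicking the readiness-indicator wait in the multus log message.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // readiness indicator file is present
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Path from the log; 1s/45s are assumptions, not multus's settings.
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        if err != nil {
            fmt.Println("readiness indicator check failed:", err)
            return
        }
        fmt.Println("default network is ready")
    }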
Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.066770 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.085120 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616c8c01-b6e2-4851-9729-888790cbbe63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T21:39:06Z\\\",\\\"message\\\":\\\"5.997165 6887 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 21:39:05.997200 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mp8ql\\\\nI0216 21:39:05.997207 6887 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-szmc4 in node crc\\\\nI0216 21:39:05.997208 6887 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz in node crc\\\\nI0216 21:39:05.997217 6887 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-sxb4b\\\\nI0216 21:39:05.997224 6887 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-szmc4 after 0 failed attempt(s)\\\\nI0216 21:39:05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:39:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vfrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rfdc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.100305 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3771a924-fabc-44f7-a2c8-8484df9700c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://890fdae4cc91d12d6e36f0b622157004981e7437a3afb79d2ef83502a0ebfe48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2f131ae558182d670a379b06037455bb8b7e544382e0a3f7f4116fd821ed0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwd47\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tv2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 
21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.128473 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvc86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sxb4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.151007 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.169389 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f759c59-befa-4d12-ab4b-c4e579fba2bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ac28413c5dac3335b251a2f7e6d5756e858f0a7556881fcfc37462e5340060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-szmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.169996 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.170041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.170057 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.170079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.170094 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.189256 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-554x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67a11891-bd2f-46f7-beb7-7d1d70b3e6a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af441380da887d69fb38dc27640134910550be513bc7627acbdc9c51c6ab778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc76f0c26566bb20cd8c594fe7cd02f8eb03874438e23ebc2f78e1060b7a9fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f490e857deb0d9f7c9ad130b3a59ce2b7751b50f501b870a9d4e09dcbf970b92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://114a38399bdb68eefe61c889077f4d7232ce9e6de9db0304e1215d20899b1d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5c3be3c1776a2ace45c0fbe932718db9cede9332bd9601e55b723e9de10253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83a7801a6b3cd1828cb8c7f85df46dc0534ba4626e5fda355baf109ccddaf1e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cedb2d92ed421c60dc230ea13ea91f9f146d15daaad58d83d7c9b96da860d578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:38:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:19Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhwqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-554x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.207765 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba5a9200c288dafae974347824909de7f4ce80ee19a21b6b699759d12892bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.224743 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2vlsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6da7745-c9c0-44c9-93e5-77cc1dd1d074\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://494e9ee9e202a3a4be6d400fb95ecdac393cce81f9df671d99e20f2f6a610180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r4n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2vlsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.240481 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dgz2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51960a32-12c3-4050-99da-f97649c432c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a6c351748b1cd3c2b53e6e6c3d5cb4047d62d153ecd6b3367b1bf61a2cd049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rr5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:38:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dgz2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.269031 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f2683d-7046-4215-ae7d-9b4bde93d0d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bcd07cbde8fd7fe97e21acd507267258c43753d68be570d3eb3f6793d8475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cd5d94bb72a99095a80465b3eb73c088ccdfed1ee388af0e844fb87153e2e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6dde40b3c535993a2058ce48081cab7e9174dd411f0c7182404224518da95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0fb1f07005bffb8bc7e19057bd0a85e517571
6a71fbefd75e67e57da13c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03155cbc0f9a67f42e3a420d4314c90242d64de20d345afaeb59c7f9456ca64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb1d2595dbdef65a582889d66375ee3123fac00deb54fe06c94be173bf7ea6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fb1d2595dbdef65a582889d66375ee3123fac00deb54fe06c94be173bf7ea6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa39ee8868e8e8e331dc51134f93be4a5e5e8da53f91d55fac40e7bcc005e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baa39ee8868e8e8e331dc51134f93be4a5e5e8da53f91d55fac40e7bcc005e8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4b732738052a7489309624f117563b400efea4fccbed73cc590532d54a7f8df0\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b732738052a7489309624f117563b400efea4fccbed73cc590532d54a7f8df0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.274074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.274150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.274178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.274210 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.274234 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.291470 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13fd12ca50d69da8ae914472fa02a08b3740a8f93abd899c0b70d77ccaa26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc598b73badd21afcac080572a1b6a282c7743d2b95d85e4355c20bd92f9f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.309721 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.322533 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.338787 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28ed63aa02f338d49b562ec35d593e83c8f0af064552794d23d51e5d37656cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.353107 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3b0e37d-7371-4ba6-aa2e-31298deeee83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbfbf3f8469e74e72430d87ebf361c5d13da2354363f99acc139b8e30179d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03758c239baf8278998e6e82dba71857c1fd4fff6899478ab85fb1b2f78a4cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9229e60d6d552eb26d664b21595b6a9f043fea67218ecf5617b81ae4723d73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.366358 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68f05192-f979-40cd-92aa-354bd6735d2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9896d54afafb06a643103717a6056e7fa18714af06237408c70a4aa4f8cd41df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5142297ef01185b89e07a10a68572aeef0fbd6496ff7d177494393d9dc6a2f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9c0065dfb1aa3d0793d49fd9c8cd10549a2a34b546ea03b43ee84d7f40f3997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d5d1d16375b0342156c258b8737efdf7ac2ef9dd2afe2423d568a371125b3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.376943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.376984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.376996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.377014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.377026 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.387748 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:37:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T21:38:08Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 21:38:07.919929 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 21:38:07.920063 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 21:38:07.920705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3584465928/tls.crt::/tmp/serving-cert-3584465928/tls.key\\\\\\\"\\\\nI0216 21:38:08.449063 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 21:38:08.454521 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 21:38:08.454543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 21:38:08.454561 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 21:38:08.454567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 21:38:08.461126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 21:38:08.461157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 21:38:08.461170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 21:38:08.461173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 21:38:08.461177 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 21:38:08.461181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 21:38:08.461288 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 21:38:08.462379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T21:38:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:38:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T21:37:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T21:37:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T21:37:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:37:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:18Z is after 2025-08-24T17:21:41Z" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.479520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.479590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.479634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.479663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.479682 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.583492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.583559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.583582 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.583656 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.583681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.686708 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.686861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.686892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.686921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.686942 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.790403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.790471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.790494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.790518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.790535 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.892901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.892970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.892989 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.893016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.893033 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.996550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.996696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.996722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.996754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:18 crc kubenswrapper[4792]: I0216 21:39:18.996778 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:18Z","lastTransitionTime":"2026-02-16T21:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.010779 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:29:25.533641448 +0000 UTC Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.025515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:19 crc kubenswrapper[4792]: E0216 21:39:19.025662 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.026334 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:39:19 crc kubenswrapper[4792]: E0216 21:39:19.026512 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.099846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.099889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.099899 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.099914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.099929 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.202303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.202338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.202349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.202364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.202375 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.304907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.304968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.304979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.304995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.305009 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.408218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.408478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.408589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.408690 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.408781 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.512098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.512411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.512682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.512896 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.513072 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.616540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.616662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.616681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.616705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.616722 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.720359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.720721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.720906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.721045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.721164 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.824737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.824796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.824813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.824836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.824853 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.927183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.927222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.927233 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.927249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:19 crc kubenswrapper[4792]: I0216 21:39:19.927260 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:19Z","lastTransitionTime":"2026-02-16T21:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.011273 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:39:04.780529712 +0000 UTC Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.025814 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:20 crc kubenswrapper[4792]: E0216 21:39:20.025955 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.025824 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:20 crc kubenswrapper[4792]: E0216 21:39:20.026044 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.025833 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:20 crc kubenswrapper[4792]: E0216 21:39:20.026459 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.029877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.029918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.029928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.029943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.029954 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:20Z","lastTransitionTime":"2026-02-16T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.132263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.132957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.133170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.133338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.133492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:20Z","lastTransitionTime":"2026-02-16T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.236737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.236816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.236898 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.236936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:20 crc kubenswrapper[4792]: I0216 21:39:20.236962 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:20Z","lastTransitionTime":"2026-02-16T21:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.011504 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:49:13.969622194 +0000 UTC
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.025149 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:21 crc kubenswrapper[4792]: E0216 21:39:21.025539 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.084751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.084825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.084844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.084870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:21 crc kubenswrapper[4792]: I0216 21:39:21.084889 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:21Z","lastTransitionTime":"2026-02-16T21:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.011728 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:16:40.233926471 +0000 UTC
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.012753 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.012828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.012839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.012856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.012866 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:22Z","lastTransitionTime":"2026-02-16T21:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.025740 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.025802 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:22 crc kubenswrapper[4792]: E0216 21:39:22.025863 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:22 crc kubenswrapper[4792]: E0216 21:39:22.025939 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:22 crc kubenswrapper[4792]: I0216 21:39:22.025970 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:22 crc kubenswrapper[4792]: E0216 21:39:22.026251 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.011957 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:45:36.789905441 +0000 UTC
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.025232 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:23 crc kubenswrapper[4792]: E0216 21:39:23.025357 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.045772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.045827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.045846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.045865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:23 crc kubenswrapper[4792]: I0216 21:39:23.045880 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:23Z","lastTransitionTime":"2026-02-16T21:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.013619 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:55:59.077470949 +0000 UTC
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.026036 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.026137 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:24 crc kubenswrapper[4792]: E0216 21:39:24.026220 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.026422 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:24 crc kubenswrapper[4792]: E0216 21:39:24.026501 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:24 crc kubenswrapper[4792]: E0216 21:39:24.026729 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.078066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.078122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.078140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.078159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:24 crc kubenswrapper[4792]: I0216 21:39:24.078174 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:24Z","lastTransitionTime":"2026-02-16T21:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.013746 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:25:26.545812228 +0000 UTC
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.025390 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:25 crc kubenswrapper[4792]: E0216 21:39:25.025565 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.830975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.831043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.831060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.831081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.831099 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:25Z","lastTransitionTime":"2026-02-16T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.933485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.933543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.933555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.933573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.933587 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:25Z","lastTransitionTime":"2026-02-16T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.954420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.954472 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.954483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.954500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.954512 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:25Z","lastTransitionTime":"2026-02-16T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:25 crc kubenswrapper[4792]: E0216 21:39:25.973587 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:25Z is after 
2025-08-24T17:21:41Z" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.978580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.978717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.978739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.978763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.978786 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:25Z","lastTransitionTime":"2026-02-16T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:25 crc kubenswrapper[4792]: E0216 21:39:25.995219 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:25Z is after 
2025-08-24T17:21:41Z" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.999531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.999607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.999624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:25 crc kubenswrapper[4792]: I0216 21:39:25.999648 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:25.999662 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:25Z","lastTransitionTime":"2026-02-16T21:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.013916 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:58:11.452556229 +0000 UTC Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.013980 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:26Z is after 
2025-08-24T17:21:41Z" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.017886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.017930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.017950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.017974 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.017992 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.026175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.026197 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.026322 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.026321 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.026615 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.026862 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.034961 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:26Z is after 
2025-08-24T17:21:41Z" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.039073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.039109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.039120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.039135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.039146 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.061836 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T21:39:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f4590c4-5339-4c82-a413-234d08dabd4a\\\",\\\"systemUUID\\\":\\\"7cf4d510-eeff-4323-b01d-9568b7e39914\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T21:39:26Z is after
2025-08-24T17:21:41Z" Feb 16 21:39:26 crc kubenswrapper[4792]: E0216 21:39:26.061993 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.063530 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.063571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.063613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.063632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.063647 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.167011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.167081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.167104 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.167133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
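Every "Error updating node status, will retry" above, and the final "update node status exceeds retry count", traces to the single cause visible in the error text: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-02-16T21:39:26Z, so the kubelet's TLS client rejects the connection before the status patch is ever delivered. A minimal Go sketch (assumptions: run on the node itself; the address comes from the log, everything else is illustrative) that reads the served certificate's validity window the same way the failing verification does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Dial the webhook endpoint from the log. InsecureSkipVerify lets us
        // inspect the certificate that normal verification rejects as expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
        fmt.Println("expired:  ", time.Now().After(cert.NotAfter)) // true here: 2026-02-16 is after 2025-08-24
    }

On this log's timeline the probe would print expired: true, which is exactly the x509 "certificate has expired or is not yet valid" failure the kubelet keeps hitting.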
Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.167155 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.270151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.270205 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.270222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.270245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.270262 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.373042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.373110 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.373131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.373161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.373186 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.476339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.476411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.476428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.476454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
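The Ready=False condition itself is independent of the webhook failure: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/ (the multus and ovnkube pods that would do so are only just coming up elsewhere in this log). A small Go sketch of the readiness check being failed, under the assumption (standard for libcni-based runtimes such as CRI-O) that a *.conf, *.conflist or *.json file in that directory is what satisfies it:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Look for any CNI network config in the directory named in the log.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir, "->", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            // This is the state the kubelet is reporting: node stays NotReady.
            fmt.Println("no CNI configuration file in", dir)
        }
    }

Once the network provider drops its config file here, NetworkReady flips to true and the KubeletNotReady condition clears on the next sync.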
Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.476472 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.579309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.579361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.579375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.579392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.579404 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.681997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.682052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.682066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.682086 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.682101 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.784519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.784625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.784649 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.784676 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.784697 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.887395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.887510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.887538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.887568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.887643 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.991489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.991560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.991578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.991646 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:26 crc kubenswrapper[4792]: I0216 21:39:26.991669 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:26Z","lastTransitionTime":"2026-02-16T21:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.014447 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:10:51.222976212 +0000 UTC Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.025066 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:27 crc kubenswrapper[4792]: E0216 21:39:27.025206 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.094591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.094705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.094722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.094745 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.094761 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.197943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.197982 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.197991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.198004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.198019 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.300262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.300300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.300311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.300325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.300335 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.402224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.402274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.402287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.402303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.402316 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.505201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.505271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.505288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.505312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.505330 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.608291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.608355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.608377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.608405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.608425 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.713636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.713702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.713721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.713746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.713763 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.817162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.817281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.817310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.817376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.817395 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.919796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.919857 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.919874 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.919898 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:27 crc kubenswrapper[4792]: I0216 21:39:27.919915 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:27Z","lastTransitionTime":"2026-02-16T21:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.014832 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:11:44.150894665 +0000 UTC Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.022564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.022638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.022650 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.022664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.022674 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:28Z","lastTransitionTime":"2026-02-16T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.025566 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:28 crc kubenswrapper[4792]: E0216 21:39:28.025767 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
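Note the certificate_manager.go lines interleaved with the NotReady spam: the kubelet-serving certificate is valid until 2026-02-24 05:53:03 UTC, yet every sync computes a fresh rotation deadline (2025-11-28 earlier, 2025-11-09 here) that is already in the past on this clock, so rotation is permanently due. Upstream client-go picks the deadline at a jittered point in roughly the last third of the certificate's lifetime; a sketch of that policy (the 70-90% constants and the one-year validity below are assumptions, not values from this log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline approximates client-go's certificate manager policy:
    // rotate at a random point roughly 70-90% of the way through validity.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
        deadline := rotationDeadline(notBefore, notAfter)
        now := time.Date(2026, 2, 16, 21, 39, 28, 0, time.UTC) // node clock in the log
        fmt.Println("deadline:", deadline, "already past:", now.After(deadline))
    }

With a one-year certificate that window lands in late 2025, which matches the deadlines the kubelet keeps logging and explains why they always precede the node's 2026 clock.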
Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.025811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.025858 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:28 crc kubenswrapper[4792]: E0216 21:39:28.025933 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:28 crc kubenswrapper[4792]: E0216 21:39:28.026046 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.076704 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.076685196 podStartE2EDuration="1m19.076685196s" podCreationTimestamp="2026-02-16 21:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.075731499 +0000 UTC m=+100.729010470" watchObservedRunningTime="2026-02-16 21:39:28.076685196 +0000 UTC m=+100.729964097" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.113930 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.113911202 podStartE2EDuration="44.113911202s" podCreationTimestamp="2026-02-16 21:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.093125383 +0000 UTC m=+100.746404314" watchObservedRunningTime="2026-02-16 21:39:28.113911202 +0000 UTC m=+100.767190113" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.124504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.124540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.124574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.124590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.124618 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:28Z","lastTransitionTime":"2026-02-16T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.127862 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.12783946 podStartE2EDuration="1m19.12783946s" podCreationTimestamp="2026-02-16 21:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.114288763 +0000 UTC m=+100.767567664" watchObservedRunningTime="2026-02-16 21:39:28.12783946 +0000 UTC m=+100.781118371" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.146260 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.146242852 podStartE2EDuration="12.146242852s" podCreationTimestamp="2026-02-16 21:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.127828079 +0000 UTC m=+100.781106980" watchObservedRunningTime="2026-02-16 21:39:28.146242852 +0000 UTC m=+100.799521743" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.146692 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mp8ql" podStartSLOduration=76.146684405 podStartE2EDuration="1m16.146684405s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.146146369 +0000 UTC m=+100.799425270" watchObservedRunningTime="2026-02-16 21:39:28.146684405 +0000 UTC m=+100.799963316" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.180127 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tv2mz" podStartSLOduration=76.180113835 podStartE2EDuration="1m16.180113835s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.179353164 +0000 UTC m=+100.832632065" watchObservedRunningTime="2026-02-16 21:39:28.180113835 +0000 UTC m=+100.833392726" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.210314 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podStartSLOduration=76.210298005 podStartE2EDuration="1m16.210298005s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.209939305 +0000 UTC m=+100.863218206" watchObservedRunningTime="2026-02-16 21:39:28.210298005 +0000 UTC m=+100.863576906" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.226918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.227147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.227242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 
21:39:28.227329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.227391 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:28Z","lastTransitionTime":"2026-02-16T21:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.229296 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-554x7" podStartSLOduration=76.229285364 podStartE2EDuration="1m16.229285364s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.228973474 +0000 UTC m=+100.882252375" watchObservedRunningTime="2026-02-16 21:39:28.229285364 +0000 UTC m=+100.882564255" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.256643 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dgz2t" podStartSLOduration=76.256626904 podStartE2EDuration="1m16.256626904s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.256547862 +0000 UTC m=+100.909826753" watchObservedRunningTime="2026-02-16 21:39:28.256626904 +0000 UTC m=+100.909905795" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.256801 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2vlsf" podStartSLOduration=76.256797489 podStartE2EDuration="1m16.256797489s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.248719165 +0000 UTC m=+100.901998066" watchObservedRunningTime="2026-02-16 21:39:28.256797489 +0000 UTC m=+100.910076380" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.277647 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.277627819 podStartE2EDuration="17.277627819s" podCreationTimestamp="2026-02-16 21:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.27624859 +0000 UTC m=+100.929527481" watchObservedRunningTime="2026-02-16 21:39:28.277627819 +0000 UTC m=+100.930906710" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.329712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.329988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 21:39:28 crc kubenswrapper[4792]: I0216 21:39:28.330075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
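The pod_startup_latency_tracker entries are the first positive signal in this stretch: the static and host-network pods (kube-controller-manager, kube-scheduler, kube-apiserver, etcd, multus, node-ca, node-resolver, ...) are now Running, and each line records how long that took. podStartSLOduration is the startup latency minus image-pull time; since firstStartedPulling/lastFinishedPulling are the zero time here (images were already present), it equals podStartE2EDuration. A tiny Go check against the etcd-crc line, assuming wall-clock arithmetic (the logged duration itself comes from the monotonic clock, so it differs by about a millisecond):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the etcd-crc entry above.
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-02-16 21:39:11 +0000 UTC")
        running, _ := time.Parse(layout, "2026-02-16 21:39:28.27624859 +0000 UTC")
        fmt.Println("approx podStartE2EDuration:", running.Sub(created)) // ~17.276s vs logged 17.277627819s
        // No pull window recorded, so podStartSLOduration == podStartE2EDuration.
    }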
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.015555 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:32:41.09971343 +0000 UTC
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.025879 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:29 crc kubenswrapper[4792]: E0216 21:39:29.026027 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.047374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.047451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.047477 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.047509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:29 crc kubenswrapper[4792]: I0216 21:39:29.047533 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:29Z","lastTransitionTime":"2026-02-16T21:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.015817 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:41:40.851772046 +0000 UTC
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.025222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.025288 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.025249 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:30 crc kubenswrapper[4792]: E0216 21:39:30.025417 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:30 crc kubenswrapper[4792]: E0216 21:39:30.025511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:30 crc kubenswrapper[4792]: E0216 21:39:30.025625 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.075179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.075237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.075255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.075281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.075298 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:30Z","lastTransitionTime":"2026-02-16T21:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
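[Editor's note] The certificate_manager.go lines above print a different rotation deadline each second because client-go re-draws a jittered deadline, at roughly 70-90% of the certificate's validity window, every time it evaluates rotation. A sketch of that jitter; the 1-year validity below is an assumption (only the expiration appears in the log), and the constants mirror the upstream behavior rather than quoting its source:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jitteredDeadline picks a uniformly random point in [0.7, 0.9] of the
// certificate's validity window, like client-go's rotation jitter.
func jitteredDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed 1-year lifetime
	for i := 0; i < 3; i++ {
		// Scattered deadlines in late 2025, like the log's 2025-11-25 .. 2026-01-01 spread.
		fmt.Println(jitteredDeadline(notBefore, notAfter))
	}
}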
Feb 16 21:39:30 crc kubenswrapper[4792]: I0216 21:39:30.595077 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:30 crc kubenswrapper[4792]: E0216 21:39:30.595237 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 21:39:30 crc kubenswrapper[4792]: E0216 21:39:30.595343 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs podName:9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8 nodeName:}" failed. No retries permitted until 2026-02-16 21:40:34.595326239 +0000 UTC m=+167.248605130 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs") pod "network-metrics-daemon-sxb4b" (UID: "9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8") : object "openshift-multus"/"metrics-daemon-secret" not registered
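[Editor's note] The "No retries permitted until ... (durationBeforeRetry 1m4s)" line above is exponential backoff on the failing mount: each failure doubles the wait before the volume operation may be retried. A sketch assuming a 500ms initial delay and a roughly two-minute ceiling; the logged 1m4s is 500ms doubled seven times, consistent with those assumed constants:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond            // assumed initial backoff
	maxDelay := 2*time.Minute + 2*time.Second  // assumed ceiling
	for failure := 1; failure <= 9; failure++ {
		fmt.Printf("failure %d -> no retries permitted for %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// failure 8 prints 1m4s, matching durationBeforeRetry in the log.
}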
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.015941 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:00:12.208175447 +0000 UTC
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.016498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.016544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.016563 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.016589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.016644 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:31Z","lastTransitionTime":"2026-02-16T21:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.025945 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:31 crc kubenswrapper[4792]: E0216 21:39:31.026103 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:31 crc kubenswrapper[4792]: I0216 21:39:31.026936 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c"
Feb 16 21:39:31 crc kubenswrapper[4792]: E0216 21:39:31.027212 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63"
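[Editor's note] The "back-off 40s" above is kubelet's CrashLoopBackOff schedule for the ovnkube-controller container: the restart delay starts around 10s and doubles per crash up to a 5m ceiling (defaults assumed from upstream kubelet; they do not appear in this log). 40s therefore corresponds to roughly the third consecutive crash:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second // assumed CrashLoopBackOff base delay
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d -> back-off %v restarting failed container\n", crash, delay)
		delay *= 2
		if delay > 5*time.Minute {
			delay = 5 * time.Minute // assumed ceiling
		}
	}
}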
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.016252 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:58:36.622360995 +0000 UTC
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.025672 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.025695 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.025997 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:32 crc kubenswrapper[4792]: E0216 21:39:32.025895 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:32 crc kubenswrapper[4792]: E0216 21:39:32.026131 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:32 crc kubenswrapper[4792]: E0216 21:39:32.026374 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.046706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.046750 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.046762 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.046780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:32 crc kubenswrapper[4792]: I0216 21:39:32.046791 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:32Z","lastTransitionTime":"2026-02-16T21:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:33 crc kubenswrapper[4792]: I0216 21:39:33.017485 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:52:48.174904872 +0000 UTC
Feb 16 21:39:33 crc kubenswrapper[4792]: I0216 21:39:33.026095 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:33 crc kubenswrapper[4792]: E0216 21:39:33.026272 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:34 crc kubenswrapper[4792]: I0216 21:39:34.018419 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:20:03.246302432 +0000 UTC
Feb 16 21:39:34 crc kubenswrapper[4792]: I0216 21:39:34.026511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:34 crc kubenswrapper[4792]: E0216 21:39:34.026706 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:34 crc kubenswrapper[4792]: I0216 21:39:34.026957 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:34 crc kubenswrapper[4792]: E0216 21:39:34.027063 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:34 crc kubenswrapper[4792]: I0216 21:39:34.027350 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:34 crc kubenswrapper[4792]: E0216 21:39:34.027446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:35 crc kubenswrapper[4792]: I0216 21:39:35.019209 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:45:20.961213971 +0000 UTC
Feb 16 21:39:35 crc kubenswrapper[4792]: I0216 21:39:35.025553 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:39:35 crc kubenswrapper[4792]: E0216 21:39:35.025802 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.020420 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:25:24.012943558 +0000 UTC
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.025787 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.025788 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.025865 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:39:36 crc kubenswrapper[4792]: E0216 21:39:36.026023 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8"
Feb 16 21:39:36 crc kubenswrapper[4792]: E0216 21:39:36.026183 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 21:39:36 crc kubenswrapper[4792]: E0216 21:39:36.026286 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.154186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.154254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.154279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.154308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.154329 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T21:39:36Z","lastTransitionTime":"2026-02-16T21:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.215884 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"]
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.216491 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.219094 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.219358 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.219475 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.219756 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.355963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b02ae05-af8e-4593-b9d0-9d7a35d00030-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.356048 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.356089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.356182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b02ae05-af8e-4593-b9d0-9d7a35d00030-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.356238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b02ae05-af8e-4593-b9d0-9d7a35d00030-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b02ae05-af8e-4593-b9d0-9d7a35d00030-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457701 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b02ae05-af8e-4593-b9d0-9d7a35d00030-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457756 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b02ae05-af8e-4593-b9d0-9d7a35d00030-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.457962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb"
\"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.458008 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1b02ae05-af8e-4593-b9d0-9d7a35d00030-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.459522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b02ae05-af8e-4593-b9d0-9d7a35d00030-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.469338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b02ae05-af8e-4593-b9d0-9d7a35d00030-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.486471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b02ae05-af8e-4593-b9d0-9d7a35d00030-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srmqb\" (UID: \"1b02ae05-af8e-4593-b9d0-9d7a35d00030\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:36 crc kubenswrapper[4792]: I0216 21:39:36.534314 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.021294 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:01:17.306443053 +0000 UTC Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.021386 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.025780 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:37 crc kubenswrapper[4792]: E0216 21:39:37.025930 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.031894 4792 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.569808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" event={"ID":"1b02ae05-af8e-4593-b9d0-9d7a35d00030","Type":"ContainerStarted","Data":"8a26dc7f92b70c43e01c650cccd3f291abf2135417d28cc160fca81ce663cc98"} Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.569901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" event={"ID":"1b02ae05-af8e-4593-b9d0-9d7a35d00030","Type":"ContainerStarted","Data":"d3b27ddf2e5311b4124a439f10c2e19758772e531819128fdaf6ebfcac07aa68"} Feb 16 21:39:37 crc kubenswrapper[4792]: I0216 21:39:37.591475 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srmqb" podStartSLOduration=85.591446915 podStartE2EDuration="1m25.591446915s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:37.589388779 +0000 UTC m=+110.242667750" watchObservedRunningTime="2026-02-16 21:39:37.591446915 +0000 UTC m=+110.244725846" Feb 16 21:39:38 crc kubenswrapper[4792]: I0216 21:39:38.025862 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:38 crc kubenswrapper[4792]: I0216 21:39:38.026015 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:38 crc kubenswrapper[4792]: I0216 21:39:38.026047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:38 crc kubenswrapper[4792]: E0216 21:39:38.030222 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:38 crc kubenswrapper[4792]: E0216 21:39:38.030354 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:38 crc kubenswrapper[4792]: E0216 21:39:38.030575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:39 crc kubenswrapper[4792]: I0216 21:39:39.025637 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:39 crc kubenswrapper[4792]: E0216 21:39:39.025843 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:40 crc kubenswrapper[4792]: I0216 21:39:40.026065 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:40 crc kubenswrapper[4792]: E0216 21:39:40.026194 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:40 crc kubenswrapper[4792]: I0216 21:39:40.026394 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:40 crc kubenswrapper[4792]: E0216 21:39:40.026453 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:40 crc kubenswrapper[4792]: I0216 21:39:40.026591 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:40 crc kubenswrapper[4792]: E0216 21:39:40.026698 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:41 crc kubenswrapper[4792]: I0216 21:39:41.025652 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:41 crc kubenswrapper[4792]: E0216 21:39:41.025872 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:42 crc kubenswrapper[4792]: I0216 21:39:42.026290 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:42 crc kubenswrapper[4792]: E0216 21:39:42.026537 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:42 crc kubenswrapper[4792]: I0216 21:39:42.026768 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:42 crc kubenswrapper[4792]: E0216 21:39:42.026905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:42 crc kubenswrapper[4792]: I0216 21:39:42.027121 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:42 crc kubenswrapper[4792]: E0216 21:39:42.027277 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:43 crc kubenswrapper[4792]: I0216 21:39:43.025486 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:43 crc kubenswrapper[4792]: E0216 21:39:43.025618 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:44 crc kubenswrapper[4792]: I0216 21:39:44.026197 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:44 crc kubenswrapper[4792]: I0216 21:39:44.026577 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:44 crc kubenswrapper[4792]: I0216 21:39:44.026669 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:44 crc kubenswrapper[4792]: E0216 21:39:44.027188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:44 crc kubenswrapper[4792]: E0216 21:39:44.027341 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:44 crc kubenswrapper[4792]: E0216 21:39:44.027468 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:44 crc kubenswrapper[4792]: I0216 21:39:44.027757 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:39:44 crc kubenswrapper[4792]: E0216 21:39:44.028074 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rfdc5_openshift-ovn-kubernetes(616c8c01-b6e2-4851-9729-888790cbbe63)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" Feb 16 21:39:45 crc kubenswrapper[4792]: I0216 21:39:45.025573 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:45 crc kubenswrapper[4792]: E0216 21:39:45.025864 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.025494 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.025552 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:46 crc kubenswrapper[4792]: E0216 21:39:46.025744 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.025813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:46 crc kubenswrapper[4792]: E0216 21:39:46.025972 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:46 crc kubenswrapper[4792]: E0216 21:39:46.026119 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.605417 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/1.log" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.606066 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/0.log" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.606146 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" containerID="363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838" exitCode=1 Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.606193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerDied","Data":"363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838"} Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.606245 4792 scope.go:117] "RemoveContainer" containerID="14145b5f92ca0883d554631b2e02cf4880684bb94d790669dcf9a1962e6279a2" Feb 16 21:39:46 crc kubenswrapper[4792]: I0216 21:39:46.606804 4792 scope.go:117] "RemoveContainer" containerID="363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838" Feb 16 21:39:46 crc kubenswrapper[4792]: E0216 21:39:46.607063 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mp8ql_openshift-multus(3f2095e9-5a78-45fb-a930-eacbd54ec73d)\"" pod="openshift-multus/multus-mp8ql" podUID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" Feb 16 21:39:47 crc kubenswrapper[4792]: I0216 21:39:47.025798 4792 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:47 crc kubenswrapper[4792]: E0216 21:39:47.026775 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:47 crc kubenswrapper[4792]: I0216 21:39:47.612218 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/1.log" Feb 16 21:39:48 crc kubenswrapper[4792]: I0216 21:39:48.025856 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:48 crc kubenswrapper[4792]: I0216 21:39:48.025932 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:48 crc kubenswrapper[4792]: I0216 21:39:48.025969 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:48 crc kubenswrapper[4792]: E0216 21:39:48.027924 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:48 crc kubenswrapper[4792]: E0216 21:39:48.028059 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:48 crc kubenswrapper[4792]: E0216 21:39:48.028255 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:48 crc kubenswrapper[4792]: E0216 21:39:48.032380 4792 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 21:39:48 crc kubenswrapper[4792]: E0216 21:39:48.112881 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:39:49 crc kubenswrapper[4792]: I0216 21:39:49.025485 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:49 crc kubenswrapper[4792]: E0216 21:39:49.025837 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:50 crc kubenswrapper[4792]: I0216 21:39:50.026226 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:50 crc kubenswrapper[4792]: I0216 21:39:50.026394 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:50 crc kubenswrapper[4792]: E0216 21:39:50.026460 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:50 crc kubenswrapper[4792]: E0216 21:39:50.026684 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:50 crc kubenswrapper[4792]: I0216 21:39:50.026831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:50 crc kubenswrapper[4792]: E0216 21:39:50.026931 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:51 crc kubenswrapper[4792]: I0216 21:39:51.025705 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:51 crc kubenswrapper[4792]: E0216 21:39:51.025955 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:52 crc kubenswrapper[4792]: I0216 21:39:52.025363 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:52 crc kubenswrapper[4792]: I0216 21:39:52.025396 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:52 crc kubenswrapper[4792]: E0216 21:39:52.025575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:52 crc kubenswrapper[4792]: I0216 21:39:52.025705 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:52 crc kubenswrapper[4792]: E0216 21:39:52.025859 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:52 crc kubenswrapper[4792]: E0216 21:39:52.025982 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:53 crc kubenswrapper[4792]: I0216 21:39:53.025309 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:53 crc kubenswrapper[4792]: E0216 21:39:53.025493 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:53 crc kubenswrapper[4792]: E0216 21:39:53.114852 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:39:54 crc kubenswrapper[4792]: I0216 21:39:54.025812 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:54 crc kubenswrapper[4792]: I0216 21:39:54.025957 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:54 crc kubenswrapper[4792]: E0216 21:39:54.026046 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:54 crc kubenswrapper[4792]: I0216 21:39:54.026109 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:54 crc kubenswrapper[4792]: E0216 21:39:54.026247 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:54 crc kubenswrapper[4792]: E0216 21:39:54.026840 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:55 crc kubenswrapper[4792]: I0216 21:39:55.026119 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:55 crc kubenswrapper[4792]: E0216 21:39:55.026300 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:56 crc kubenswrapper[4792]: I0216 21:39:56.025730 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:56 crc kubenswrapper[4792]: I0216 21:39:56.025817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:56 crc kubenswrapper[4792]: I0216 21:39:56.025861 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:56 crc kubenswrapper[4792]: E0216 21:39:56.026033 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:56 crc kubenswrapper[4792]: E0216 21:39:56.026302 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:56 crc kubenswrapper[4792]: E0216 21:39:56.026406 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:57 crc kubenswrapper[4792]: I0216 21:39:57.025445 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:57 crc kubenswrapper[4792]: E0216 21:39:57.025621 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.025642 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.025642 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.025775 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:39:58 crc kubenswrapper[4792]: E0216 21:39:58.027733 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:58 crc kubenswrapper[4792]: E0216 21:39:58.028024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:39:58 crc kubenswrapper[4792]: E0216 21:39:58.028423 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.028989 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:39:58 crc kubenswrapper[4792]: E0216 21:39:58.115435 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.653836 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/3.log" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.656673 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerStarted","Data":"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382"} Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.657085 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.690395 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podStartSLOduration=106.690371972 podStartE2EDuration="1m46.690371972s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:58.689017624 +0000 UTC m=+131.342296705" watchObservedRunningTime="2026-02-16 21:39:58.690371972 +0000 UTC m=+131.343650903" Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.992806 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sxb4b"] Feb 16 21:39:58 crc kubenswrapper[4792]: I0216 21:39:58.992989 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:39:58 crc kubenswrapper[4792]: E0216 21:39:58.993139 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:39:59 crc kubenswrapper[4792]: I0216 21:39:59.026288 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:39:59 crc kubenswrapper[4792]: E0216 21:39:59.026387 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:39:59 crc kubenswrapper[4792]: I0216 21:39:59.026855 4792 scope.go:117] "RemoveContainer" containerID="363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838" Feb 16 21:39:59 crc kubenswrapper[4792]: I0216 21:39:59.662325 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/1.log" Feb 16 21:39:59 crc kubenswrapper[4792]: I0216 21:39:59.662729 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerStarted","Data":"664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436"} Feb 16 21:40:00 crc kubenswrapper[4792]: I0216 21:40:00.025432 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:40:00 crc kubenswrapper[4792]: I0216 21:40:00.025482 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:40:00 crc kubenswrapper[4792]: E0216 21:40:00.025687 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:40:00 crc kubenswrapper[4792]: E0216 21:40:00.025776 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:40:01 crc kubenswrapper[4792]: I0216 21:40:01.025701 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:40:01 crc kubenswrapper[4792]: I0216 21:40:01.025721 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:01 crc kubenswrapper[4792]: E0216 21:40:01.025917 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:40:01 crc kubenswrapper[4792]: E0216 21:40:01.026207 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:40:02 crc kubenswrapper[4792]: I0216 21:40:02.025891 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:40:02 crc kubenswrapper[4792]: I0216 21:40:02.025927 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:40:02 crc kubenswrapper[4792]: E0216 21:40:02.026089 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 21:40:02 crc kubenswrapper[4792]: E0216 21:40:02.026194 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 21:40:03 crc kubenswrapper[4792]: I0216 21:40:03.025260 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:03 crc kubenswrapper[4792]: I0216 21:40:03.025319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:40:03 crc kubenswrapper[4792]: E0216 21:40:03.025414 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sxb4b" podUID="9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8" Feb 16 21:40:03 crc kubenswrapper[4792]: E0216 21:40:03.025511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 21:40:04 crc kubenswrapper[4792]: I0216 21:40:04.026267 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 21:40:04 crc kubenswrapper[4792]: I0216 21:40:04.026336 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:40:04 crc kubenswrapper[4792]: I0216 21:40:04.029866 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 21:40:04 crc kubenswrapper[4792]: I0216 21:40:04.034999 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.025578 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.025660 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.028847 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.029752 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.032731 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 21:40:05 crc kubenswrapper[4792]: I0216 21:40:05.032746 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.479348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.536467 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.537315 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.538560 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5jwvl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.539995 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.540314 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ncn6b"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.541096 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.545108 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.545853 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546218 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546376 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546410 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546374 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546487 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546843 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.546996 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.547225 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.547415 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.547913 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548233 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548561 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548574 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548758 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548888 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.548906 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.549524 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.549645 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.549827 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.550455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.550664 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.551068 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.551431 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.551987 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.560931 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.563864 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.564413 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.564679 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.565339 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.565541 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.570075 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.586404 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.586563 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gd457"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.587123 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.587407 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.587753 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.587992 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588163 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588214 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588311 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588021 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588461 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588555 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588575 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588165 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.588917 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.589125 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.589195 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.589428 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.589132 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g67z5"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.590104 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591008 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591197 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591325 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591457 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591540 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.591799 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.592117 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.590110 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.592296 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.592560 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.593161 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.593299 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.600793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.601986 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9fln"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.602441 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.602776 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tr7np"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.602849 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.603220 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.603349 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.603749 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.604211 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.604641 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.605231 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.605562 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.605859 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.606166 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.606539 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sn4zb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.606849 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.607307 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.607911 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.610718 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.611100 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bnsxs"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.611539 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.611817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.616516 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.617864 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kvt2"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.618554 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.618949 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.619056 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.619199 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.619333 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.619641 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.620254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.620872 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.621073 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.625275 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2k2ct"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.645641 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.646099 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 
16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.646421 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.646670 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.647059 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.647274 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.647430 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.647705 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.649221 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.649310 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.649923 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.649996 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.650652 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.651367 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.651639 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.653000 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.653062 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.653152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.651371 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.653290 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.658406 4792 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.666426 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.666915 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667130 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667224 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667349 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667462 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667520 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667464 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.667803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.668392 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.668964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qh5\" (UniqueName: \"kubernetes.io/projected/59a735fb-20bd-48e7-9c0c-f79fe28c6190-kube-api-access-j5qh5\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669006 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669066 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-client\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669093 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669121 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669147 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8xk\" (UniqueName: \"kubernetes.io/projected/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-kube-api-access-cs8xk\") pod 
\"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-auth-proxy-config\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669253 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-policies\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669254 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669272 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-dir\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-config\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669369 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0712775-7995-4058-9326-15ae6f90a6fe-machine-approver-tls\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-serving-cert\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fpnnb\" (UniqueName: \"kubernetes.io/projected/5e2db923-4a84-4a7d-8507-065f4920080d-kube-api-access-fpnnb\") pod \"downloads-7954f5f757-gd457\" (UID: \"5e2db923-4a84-4a7d-8507-065f4920080d\") " pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46dg8\" (UniqueName: \"kubernetes.io/projected/b3f992b5-86f2-4dff-b132-b7b22e6e9629-kube-api-access-46dg8\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669468 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-encryption-config\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f992b5-86f2-4dff-b132-b7b22e6e9629-metrics-tls\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.669580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sxk\" (UniqueName: \"kubernetes.io/projected/e0712775-7995-4058-9326-15ae6f90a6fe-kube-api-access-f2sxk\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.670915 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.670956 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-72gf6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.671430 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.671501 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.671521 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.671663 4792 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.671782 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.672052 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.672474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.672486 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.672502 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.673277 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.675009 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.675951 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.676250 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.676905 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.677299 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.677747 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.677895 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.677897 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.678327 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.678543 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.679865 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.681773 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.682307 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.683874 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.688663 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.688992 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdk54"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.689389 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.689725 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.690050 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.690316 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5jwvl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.690441 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.691983 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.695706 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.696538 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.702240 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xcvfd"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.702831 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.703185 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.707089 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.708774 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.709312 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.710640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.717296 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.720229 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.727976 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.728739 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.729501 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.729964 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.730952 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.732648 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ncn6b"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.732723 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hjb5c"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.733685 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.735097 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v962t"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.736524 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.737541 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g67z5"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.738523 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.739211 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.740165 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gd457"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.741694 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9fln"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.744664 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tr7np"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.746144 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.748358 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kvt2"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.750018 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.751988 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.754962 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.756313 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.757788 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bnsxs"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.758902 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.760057 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.763882 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-72gf6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.766567 4792 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f992b5-86f2-4dff-b132-b7b22e6e9629-metrics-tls\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-encryption-config\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2sxk\" (UniqueName: \"kubernetes.io/projected/e0712775-7995-4058-9326-15ae6f90a6fe-kube-api-access-f2sxk\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qh5\" (UniqueName: \"kubernetes.io/projected/59a735fb-20bd-48e7-9c0c-f79fe28c6190-kube-api-access-j5qh5\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770407 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-client\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770423 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770439 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770477 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8xk\" (UniqueName: \"kubernetes.io/projected/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-kube-api-access-cs8xk\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-auth-proxy-config\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770532 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-policies\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-dir\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770585 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-config\") pod 
\"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpnnb\" (UniqueName: \"kubernetes.io/projected/5e2db923-4a84-4a7d-8507-065f4920080d-kube-api-access-fpnnb\") pod \"downloads-7954f5f757-gd457\" (UID: \"5e2db923-4a84-4a7d-8507-065f4920080d\") " pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0712775-7995-4058-9326-15ae6f90a6fe-machine-approver-tls\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-serving-cert\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.770730 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46dg8\" (UniqueName: \"kubernetes.io/projected/b3f992b5-86f2-4dff-b132-b7b22e6e9629-kube-api-access-46dg8\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.771453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-dir\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.771856 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.772279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-config\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.772398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.772623 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-audit-policies\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.773175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.773380 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0712775-7995-4058-9326-15ae6f90a6fe-auth-proxy-config\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.773540 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9zpgg"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.775012 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.775095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.775333 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z8w5w"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.776330 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.776898 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.778715 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.779366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-encryption-config\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.779450 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.779624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0712775-7995-4058-9326-15ae6f90a6fe-machine-approver-tls\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.779880 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.780056 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.780156 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.780763 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-serving-cert\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.781512 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v962t"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.781565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59a735fb-20bd-48e7-9c0c-f79fe28c6190-etcd-client\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 
21:40:06.782445 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.784761 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.784785 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.785988 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.786674 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sn4zb"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.787473 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdk54"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.788823 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.789788 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xcvfd"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.791341 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hjb5c"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.792444 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.793660 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.794866 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.796335 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.798028 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z8w5w"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.799264 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh"] Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.799742 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.820710 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.839820 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.861113 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.880690 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.900578 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.919588 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.939310 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.960369 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 21:40:06 crc kubenswrapper[4792]: I0216 21:40:06.980080 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.000180 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.020743 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.040804 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.061719 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.080245 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.085803 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3f992b5-86f2-4dff-b132-b7b22e6e9629-metrics-tls\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.100396 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.140453 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.160024 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.179372 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.200496 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.220744 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.240668 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.259971 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.280305 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.299781 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.320972 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.339681 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.360030 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.380033 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.399870 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.431742 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.440955 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.479988 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.500578 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.519922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.540185 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.560387 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.580424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.610130 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.619796 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.640019 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.660425 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.679940 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.698043 4792 request.go:700] Waited for 1.019846156s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.699687 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.720020 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.740828 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.759905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.779999 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.800194 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.820288 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.841099 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.860726 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.880061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.900548 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.920234 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 
21:40:07.940665 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.961046 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 21:40:07 crc kubenswrapper[4792]: I0216 21:40:07.980192 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.000168 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.011461 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.019518 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.039875 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.060501 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.080407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.100552 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.130190 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.140333 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.160212 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.179106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.200719 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.220424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.239936 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.260501 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.279669 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.300294 4792 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.320724 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.340472 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.359675 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.379752 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.400522 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.420142 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.440772 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.488919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46dg8\" (UniqueName: \"kubernetes.io/projected/b3f992b5-86f2-4dff-b132-b7b22e6e9629-kube-api-access-46dg8\") pod \"dns-operator-744455d44c-6kvt2\" (UID: \"b3f992b5-86f2-4dff-b132-b7b22e6e9629\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.510251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpnnb\" (UniqueName: \"kubernetes.io/projected/5e2db923-4a84-4a7d-8507-065f4920080d-kube-api-access-fpnnb\") pod \"downloads-7954f5f757-gd457\" (UID: \"5e2db923-4a84-4a7d-8507-065f4920080d\") " pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.521923 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.532410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cb00c52-ac92-41bb-8b6a-08d31f4932cb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4f6mr\" (UID: \"8cb00c52-ac92-41bb-8b6a-08d31f4932cb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.546138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qh5\" (UniqueName: \"kubernetes.io/projected/59a735fb-20bd-48e7-9c0c-f79fe28c6190-kube-api-access-j5qh5\") pod \"apiserver-7bbb656c7d-nf4fz\" (UID: \"59a735fb-20bd-48e7-9c0c-f79fe28c6190\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.617981 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.618215 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.619915 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.621867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8xk\" (UniqueName: \"kubernetes.io/projected/5f3c5727-093c-443f-aac8-dd7f2e5ab7f8-kube-api-access-cs8xk\") pod \"openshift-apiserver-operator-796bbdcf4f-t4mfn\" (UID: \"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.622382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2sxk\" (UniqueName: \"kubernetes.io/projected/e0712775-7995-4058-9326-15ae6f90a6fe-kube-api-access-f2sxk\") pod \"machine-approver-56656f9798-7tzmh\" (UID: \"e0712775-7995-4058-9326-15ae6f90a6fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.631568 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.640030 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.660247 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.661256 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.681523 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.698206 4792 request.go:700] Waited for 1.921598415s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.700085 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.725448 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.761494 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gd457"] Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.795080 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.812859 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5r5z\" (UniqueName: \"kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818432 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-serving-cert\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818458 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-service-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818483 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba97d89e-7ec1-423e-b15a-a44253eac499-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818503 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818587 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/735a4b10-b520-4e48-8cd0-fd47615af04b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818641 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-config\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818699 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-service-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818720 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b48f63c-36d5-48ac-98c0-fe4313495425-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735a4b10-b520-4e48-8cd0-fd47615af04b-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-encryption-config\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818779 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818843 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-config\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818889 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818912 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjcz\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-kube-api-access-dqjcz\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-images\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.818985 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819017 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819080 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdclc\" (UniqueName: \"kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-audit-dir\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819119 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b48f63c-36d5-48ac-98c0-fe4313495425-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819140 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-client\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819161 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819181 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/14e13832-467f-4f02-9ded-be8ca6bc6ed2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-serving-cert\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819227 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxrw\" (UniqueName: \"kubernetes.io/projected/289de29e-7a1c-4076-9aa4-b829a2f9b004-kube-api-access-lxxrw\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba97d89e-7ec1-423e-b15a-a44253eac499-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-config\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-config\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819327 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68497d64-90d5-4346-aad5-abf525df6845-config\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819356 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819378 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819398 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-image-import-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68497d64-90d5-4346-aad5-abf525df6845-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819442 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819462 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289de29e-7a1c-4076-9aa4-b829a2f9b004-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819492 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819539 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss5qk\" (UniqueName: \"kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.819561 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: E0216 21:40:08.821443 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.321424437 +0000 UTC m=+141.974703428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821447 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1a69fa0-202e-42db-905c-8cc07f3ffa24-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821552 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68497d64-90d5-4346-aad5-abf525df6845-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-node-pullsecrets\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-client\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9rps\" (UniqueName: \"kubernetes.io/projected/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-kube-api-access-f9rps\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821764 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821854 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821912 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzw8\" (UniqueName: \"kubernetes.io/projected/735a4b10-b520-4e48-8cd0-fd47615af04b-kube-api-access-snzw8\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.821990 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blrm5\" (UniqueName: \"kubernetes.io/projected/ec33f265-8d79-4cf8-9565-ddc375565069-kube-api-access-blrm5\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822013 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsxn\" (UniqueName: \"kubernetes.io/projected/7f4cbae2-e549-4595-960c-8aacaca61776-kube-api-access-cqsxn\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822156 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwg7x\" (UniqueName: \"kubernetes.io/projected/a1a69fa0-202e-42db-905c-8cc07f3ffa24-kube-api-access-bwg7x\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822171 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-audit\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822189 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822231 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b48f63c-36d5-48ac-98c0-fe4313495425-config\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289de29e-7a1c-4076-9aa4-b829a2f9b004-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822304 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822339 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: 
\"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822406 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnjmp\" (UniqueName: \"kubernetes.io/projected/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-kube-api-access-qnjmp\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822423 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6bgt\" (UniqueName: \"kubernetes.io/projected/1fd5e410-68ff-42f7-a7fb-f138c0eff419-kube-api-access-c6bgt\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822457 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822478 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-serving-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822492 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4v2p\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822508 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1fd5e410-68ff-42f7-a7fb-f138c0eff419-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token\") pod 
\"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822651 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcccz\" (UniqueName: \"kubernetes.io/projected/14e13832-467f-4f02-9ded-be8ca6bc6ed2-kube-api-access-qcccz\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822665 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-serving-cert\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.822744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.832113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr"] Feb 16 21:40:08 crc kubenswrapper[4792]: W0216 21:40:08.836489 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0712775_7995_4058_9326_15ae6f90a6fe.slice/crio-e92ceb696709802b38e91bc71b09736aa3eed6d15edc7370637b13f18eeb102a WatchSource:0}: Error finding container e92ceb696709802b38e91bc71b09736aa3eed6d15edc7370637b13f18eeb102a: Status 404 returned error can't find the container with id e92ceb696709802b38e91bc71b09736aa3eed6d15edc7370637b13f18eeb102a Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.867985 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kvt2"] Feb 16 21:40:08 crc kubenswrapper[4792]: E0216 21:40:08.923732 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.423707543 +0000 UTC m=+142.076986434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.923578 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.924298 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.924325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blrm5\" (UniqueName: \"kubernetes.io/projected/ec33f265-8d79-4cf8-9565-ddc375565069-kube-api-access-blrm5\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.924386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.924411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsxn\" (UniqueName: \"kubernetes.io/projected/7f4cbae2-e549-4595-960c-8aacaca61776-kube-api-access-cqsxn\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5tbz\" (UniqueName: \"kubernetes.io/projected/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-kube-api-access-r5tbz\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925459 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwg7x\" (UniqueName: 
\"kubernetes.io/projected/a1a69fa0-202e-42db-905c-8cc07f3ffa24-kube-api-access-bwg7x\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925483 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1350e708-602a-4919-9178-424fc36b043b-proxy-tls\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jcn4\" (UniqueName: \"kubernetes.io/projected/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-kube-api-access-2jcn4\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jc2r\" (UniqueName: \"kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925550 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-cabundle\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925565 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-config\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925585 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289de29e-7a1c-4076-9aa4-b829a2f9b004-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-metrics-tls\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-profile-collector-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnjmp\" (UniqueName: \"kubernetes.io/projected/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-kube-api-access-qnjmp\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925842 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6bgt\" (UniqueName: \"kubernetes.io/projected/1fd5e410-68ff-42f7-a7fb-f138c0eff419-kube-api-access-c6bgt\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925899 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.925988 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926046 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token\") pod 
\"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4v2p\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1fd5e410-68ff-42f7-a7fb-f138c0eff419-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcccz\" (UniqueName: \"kubernetes.io/projected/14e13832-467f-4f02-9ded-be8ca6bc6ed2-kube-api-access-qcccz\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926268 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b306c2d-5380-4048-aac2-26c834e948cc-config\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-serving-cert\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-service-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba97d89e-7ec1-423e-b15a-a44253eac499-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.926978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-service-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-config\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927157 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-encryption-config\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b48f63c-36d5-48ac-98c0-fe4313495425-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927237 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tdp\" (UniqueName: \"kubernetes.io/projected/a78bbde7-7601-41dc-a9ef-a326cd6da349-kube-api-access-l5tdp\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:08 crc kubenswrapper[4792]: 
I0216 21:40:08.927286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjcz\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-kube-api-access-dqjcz\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927446 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75a747bf-419d-47c3-bd88-628deb937dc7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927747 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-images\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvp8\" (UniqueName: \"kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927854 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927882 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78qv\" (UniqueName: \"kubernetes.io/projected/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-kube-api-access-r78qv\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927906 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-client\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927933 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b48f63c-36d5-48ac-98c0-fe4313495425-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.927980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/14e13832-467f-4f02-9ded-be8ca6bc6ed2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.928004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-images\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.928081 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-config\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.929182 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.930051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-images\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.930457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.930930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-service-ca-bundle\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931066 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-service-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba97d89e-7ec1-423e-b15a-a44253eac499-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-mountpoint-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-csi-data-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-config\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931433 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f3f794e-3279-48fc-a684-e6d40fadd760-service-ca-bundle\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931473 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwwq\" (UniqueName: \"kubernetes.io/projected/acfdd228-16ae-48f8-9737-c57e42024344-kube-api-access-6bwwq\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1fd5e410-68ff-42f7-a7fb-f138c0eff419-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.931837 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68497d64-90d5-4346-aad5-abf525df6845-config\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14e13832-467f-4f02-9ded-be8ca6bc6ed2-config\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932404 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: E0216 21:40:08.932464 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.432449912 +0000 UTC m=+142.085728903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932492 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68497d64-90d5-4346-aad5-abf525df6845-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba97d89e-7ec1-423e-b15a-a44253eac499-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-tmpfs\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932678 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289de29e-7a1c-4076-9aa4-b829a2f9b004-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-serving-cert\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932730 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhht\" (UniqueName: \"kubernetes.io/projected/5763ee94-31ba-43bf-8aaa-c943fa59c080-kube-api-access-8lhht\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68497d64-90d5-4346-aad5-abf525df6845-config\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932902 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss5qk\" (UniqueName: \"kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.932974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 
21:40:08.932998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-webhook-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n55v6\" (UniqueName: \"kubernetes.io/projected/156ded60-abce-4ec4-912b-cbfece0f8d30-kube-api-access-n55v6\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933049 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b32c7a47-9e78-4732-a919-4cb62dc13f06-trusted-ca\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhqh\" (UniqueName: \"kubernetes.io/projected/2b306c2d-5380-4048-aac2-26c834e948cc-kube-api-access-skhqh\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933101 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1a69fa0-202e-42db-905c-8cc07f3ffa24-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933128 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68497d64-90d5-4346-aad5-abf525df6845-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933144 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933165 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-client\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933191 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w242t\" (UniqueName: \"kubernetes.io/projected/c554cead-1e24-4255-9682-6a0ddb6e54b6-kube-api-access-w242t\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933218 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snzw8\" (UniqueName: \"kubernetes.io/projected/735a4b10-b520-4e48-8cd0-fd47615af04b-kube-api-access-snzw8\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p7f\" (UniqueName: \"kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933335 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-bound-sa-token\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933381 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-audit\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933430 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-apiservice-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933454 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-socket-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4tz\" (UniqueName: \"kubernetes.io/projected/6603585f-6685-44a6-b3c8-1e938e10cbb4-kube-api-access-2l4tz\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b48f63c-36d5-48ac-98c0-fe4313495425-config\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933530 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acfdd228-16ae-48f8-9737-c57e42024344-cert\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.933565 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-plugins-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" 
Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934424 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934543 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934568 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934632 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-serving-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-metrics-certs\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934901 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2lf\" (UniqueName: \"kubernetes.io/projected/1f3f794e-3279-48fc-a684-e6d40fadd760-kube-api-access-7g2lf\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-certs\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.934978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935005 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-serving-cert\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935052 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5r5z\" (UniqueName: \"kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935129 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75a747bf-419d-47c3-bd88-628deb937dc7-proxy-tls\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-stats-auth\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/735a4b10-b520-4e48-8cd0-fd47615af04b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935423 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6n5b\" (UniqueName: \"kubernetes.io/projected/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-kube-api-access-p6n5b\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-client\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735a4b10-b520-4e48-8cd0-fd47615af04b-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935519 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78bbde7-7601-41dc-a9ef-a326cd6da349-serving-cert\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935626 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-default-certificate\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935655 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb6sm\" (UniqueName: \"kubernetes.io/projected/3e236ddc-88ad-474a-b7c2-ada364746f6d-kube-api-access-gb6sm\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935679 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-config\") pod \"apiserver-76f77b778f-5jwvl\" 
(UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935730 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqhv\" (UniqueName: \"kubernetes.io/projected/1350e708-602a-4919-9178-424fc36b043b-kube-api-access-bqqhv\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-srv-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935791 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-srv-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935947 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.935970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936023 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936010 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b48f63c-36d5-48ac-98c0-fe4313495425-config\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c554cead-1e24-4255-9682-6a0ddb6e54b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936099 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbrl\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-kube-api-access-8qbrl\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-audit-dir\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936209 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdclc\" (UniqueName: \"kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqwfk\" (UniqueName: \"kubernetes.io/projected/85aa40ba-6873-4c3d-9396-760b4597d183-kube-api-access-kqwfk\") pod \"migrator-59844c95c7-sshb4\" (UID: \"85aa40ba-6873-4c3d-9396-760b4597d183\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-serving-cert\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxrw\" (UniqueName: \"kubernetes.io/projected/289de29e-7a1c-4076-9aa4-b829a2f9b004-kube-api-access-lxxrw\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936405 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-etcd-ca\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289de29e-7a1c-4076-9aa4-b829a2f9b004-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b32c7a47-9e78-4732-a919-4cb62dc13f06-metrics-tls\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936672 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-key\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936735 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-config\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936766 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-node-bootstrap-token\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936821 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936849 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b306c2d-5380-4048-aac2-26c834e948cc-serving-cert\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936905 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgwr\" (UniqueName: \"kubernetes.io/projected/75a747bf-419d-47c3-bd88-628deb937dc7-kube-api-access-mwgwr\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.936978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-image-import-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937018 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-config-volume\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937042 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937178 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-audit-dir\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937183 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-registration-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-node-pullsecrets\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937306 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/14e13832-467f-4f02-9ded-be8ca6bc6ed2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937316 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9rps\" (UniqueName: \"kubernetes.io/projected/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-kube-api-access-f9rps\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-trusted-ca\") pod \"console-operator-58897d9998-72gf6\" (UID: 
\"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.937837 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-serving-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.938216 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec33f265-8d79-4cf8-9565-ddc375565069-node-pullsecrets\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.938448 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.938950 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.939243 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba97d89e-7ec1-423e-b15a-a44253eac499-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.939439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/735a4b10-b520-4e48-8cd0-fd47615af04b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.939553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.939861 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.939877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-audit\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.940347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-config\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.940419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68497d64-90d5-4346-aad5-abf525df6845-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.940787 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-etcd-client\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.940852 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-encryption-config\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.940944 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.941350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.941357 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.941434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.941490 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ec33f265-8d79-4cf8-9565-ddc375565069-image-import-ca\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.941766 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4cbae2-e549-4595-960c-8aacaca61776-config\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.942054 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.942716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.942814 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289de29e-7a1c-4076-9aa4-b829a2f9b004-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.942883 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.943316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b48f63c-36d5-48ac-98c0-fe4313495425-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.943644 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-serving-cert\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.943732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec33f265-8d79-4cf8-9565-ddc375565069-serving-cert\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.944202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1a69fa0-202e-42db-905c-8cc07f3ffa24-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.945099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.948067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.948114 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.948421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/735a4b10-b520-4e48-8cd0-fd47615af04b-serving-cert\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.949540 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.949852 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle\") pod 
\"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.979723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blrm5\" (UniqueName: \"kubernetes.io/projected/ec33f265-8d79-4cf8-9565-ddc375565069-kube-api-access-blrm5\") pod \"apiserver-76f77b778f-5jwvl\" (UID: \"ec33f265-8d79-4cf8-9565-ddc375565069\") " pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.987656 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:08 crc kubenswrapper[4792]: I0216 21:40:08.995535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsxn\" (UniqueName: \"kubernetes.io/projected/7f4cbae2-e549-4595-960c-8aacaca61776-kube-api-access-cqsxn\") pod \"etcd-operator-b45778765-sn4zb\" (UID: \"7f4cbae2-e549-4595-960c-8aacaca61776\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.018524 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.023337 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwg7x\" (UniqueName: \"kubernetes.io/projected/a1a69fa0-202e-42db-905c-8cc07f3ffa24-kube-api-access-bwg7x\") pod \"cluster-samples-operator-665b6dd947-xhqxb\" (UID: \"a1a69fa0-202e-42db-905c-8cc07f3ffa24\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.033867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6bgt\" (UniqueName: \"kubernetes.io/projected/1fd5e410-68ff-42f7-a7fb-f138c0eff419-kube-api-access-c6bgt\") pod \"multus-admission-controller-857f4d67dd-bnsxs\" (UID: \"1fd5e410-68ff-42f7-a7fb-f138c0eff419\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038408 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038566 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-images\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038621 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-mountpoint-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038639 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-csi-data-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f3f794e-3279-48fc-a684-e6d40fadd760-service-ca-bundle\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bwwq\" (UniqueName: \"kubernetes.io/projected/acfdd228-16ae-48f8-9737-c57e42024344-kube-api-access-6bwwq\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-tmpfs\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lhht\" (UniqueName: \"kubernetes.io/projected/5763ee94-31ba-43bf-8aaa-c943fa59c080-kube-api-access-8lhht\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038757 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-webhook-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038793 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n55v6\" (UniqueName: \"kubernetes.io/projected/156ded60-abce-4ec4-912b-cbfece0f8d30-kube-api-access-n55v6\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038808 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b32c7a47-9e78-4732-a919-4cb62dc13f06-trusted-ca\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" 
Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhqh\" (UniqueName: \"kubernetes.io/projected/2b306c2d-5380-4048-aac2-26c834e948cc-kube-api-access-skhqh\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038843 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w242t\" (UniqueName: \"kubernetes.io/projected/c554cead-1e24-4255-9682-6a0ddb6e54b6-kube-api-access-w242t\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p7f\" (UniqueName: \"kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038879 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038919 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-bound-sa-token\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038933 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-apiservice-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-socket-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038963 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2l4tz\" (UniqueName: \"kubernetes.io/projected/6603585f-6685-44a6-b3c8-1e938e10cbb4-kube-api-access-2l4tz\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acfdd228-16ae-48f8-9737-c57e42024344-cert\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.038995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-plugins-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039009 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-metrics-certs\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039057 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2lf\" (UniqueName: \"kubernetes.io/projected/1f3f794e-3279-48fc-a684-e6d40fadd760-kube-api-access-7g2lf\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039071 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-certs\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75a747bf-419d-47c3-bd88-628deb937dc7-proxy-tls\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-stats-auth\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039134 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6n5b\" (UniqueName: \"kubernetes.io/projected/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-kube-api-access-p6n5b\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039158 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78bbde7-7601-41dc-a9ef-a326cd6da349-serving-cert\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-default-certificate\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039189 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb6sm\" (UniqueName: \"kubernetes.io/projected/3e236ddc-88ad-474a-b7c2-ada364746f6d-kube-api-access-gb6sm\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqhv\" (UniqueName: \"kubernetes.io/projected/1350e708-602a-4919-9178-424fc36b043b-kube-api-access-bqqhv\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-srv-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-srv-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039255 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039270 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c554cead-1e24-4255-9682-6a0ddb6e54b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbrl\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-kube-api-access-8qbrl\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqwfk\" (UniqueName: \"kubernetes.io/projected/85aa40ba-6873-4c3d-9396-760b4597d183-kube-api-access-kqwfk\") pod \"migrator-59844c95c7-sshb4\" (UID: \"85aa40ba-6873-4c3d-9396-760b4597d183\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b32c7a47-9e78-4732-a919-4cb62dc13f06-metrics-tls\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039340 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-key\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-node-bootstrap-token\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b306c2d-5380-4048-aac2-26c834e948cc-serving-cert\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039387 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwgwr\" (UniqueName: 
\"kubernetes.io/projected/75a747bf-419d-47c3-bd88-628deb937dc7-kube-api-access-mwgwr\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-config-volume\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039420 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039436 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-registration-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039464 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-trusted-ca\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tbz\" (UniqueName: \"kubernetes.io/projected/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-kube-api-access-r5tbz\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1350e708-602a-4919-9178-424fc36b043b-proxy-tls\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039526 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jcn4\" (UniqueName: \"kubernetes.io/projected/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-kube-api-access-2jcn4\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc 
kubenswrapper[4792]: E0216 21:40:09.039870 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.539847464 +0000 UTC m=+142.193126425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.040521 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-images\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.040612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-mountpoint-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.040684 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-csi-data-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1350e708-602a-4919-9178-424fc36b043b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.039542 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jc2r\" (UniqueName: \"kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041397 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f3f794e-3279-48fc-a684-e6d40fadd760-service-ca-bundle\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-cabundle\") 
pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-config\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041457 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-metrics-tls\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041475 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-profile-collector-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041499 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b306c2d-5380-4048-aac2-26c834e948cc-config\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tdp\" (UniqueName: \"kubernetes.io/projected/a78bbde7-7601-41dc-a9ef-a326cd6da349-kube-api-access-l5tdp\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041737 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75a747bf-419d-47c3-bd88-628deb937dc7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041756 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvp8\" (UniqueName: \"kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r78qv\" (UniqueName: \"kubernetes.io/projected/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-kube-api-access-r78qv\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.041911 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-tmpfs\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.043102 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.043334 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-certs\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.043940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-plugins-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.044030 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-socket-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.045882 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-registration-dir\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.046592 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 
16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.046649 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.046900 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-trusted-ca\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.048183 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b32c7a47-9e78-4732-a919-4cb62dc13f06-trusted-ca\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.048311 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acfdd228-16ae-48f8-9737-c57e42024344-cert\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.048852 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.049120 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-cabundle\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.049378 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.049840 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1350e708-602a-4919-9178-424fc36b043b-proxy-tls\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.050404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b306c2d-5380-4048-aac2-26c834e948cc-config\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.050646 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75a747bf-419d-47c3-bd88-628deb937dc7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.050892 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-stats-auth\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.051771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78bbde7-7601-41dc-a9ef-a326cd6da349-config\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.051938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-webhook-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.051958 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-config-volume\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.053641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-metrics-certs\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.053777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.054317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.054336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b32c7a47-9e78-4732-a919-4cb62dc13f06-metrics-tls\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.054654 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6603585f-6685-44a6-b3c8-1e938e10cbb4-signing-key\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.054730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78bbde7-7601-41dc-a9ef-a326cd6da349-serving-cert\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.055205 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75a747bf-419d-47c3-bd88-628deb937dc7-proxy-tls\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.055214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c554cead-1e24-4255-9682-6a0ddb6e54b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.055554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.055665 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-srv-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.055720 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5763ee94-31ba-43bf-8aaa-c943fa59c080-node-bootstrap-token\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.056031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-metrics-tls\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.056196 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-profile-collector-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.056993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f3f794e-3279-48fc-a684-e6d40fadd760-default-certificate\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.057298 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-apiservice-cert\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.057496 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e236ddc-88ad-474a-b7c2-ada364746f6d-srv-cert\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.057930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/156ded60-abce-4ec4-912b-cbfece0f8d30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.059270 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.060095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcccz\" (UniqueName: \"kubernetes.io/projected/14e13832-467f-4f02-9ded-be8ca6bc6ed2-kube-api-access-qcccz\") pod \"machine-api-operator-5694c8668f-ncn6b\" (UID: \"14e13832-467f-4f02-9ded-be8ca6bc6ed2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.060617 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b306c2d-5380-4048-aac2-26c834e948cc-serving-cert\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.074534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.101884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnjmp\" (UniqueName: \"kubernetes.io/projected/1c9dbe72-a988-4a19-ae1b-b849c040a6c7-kube-api-access-qnjmp\") pod \"kube-storage-version-migrator-operator-b67b599dd-ddgfq\" (UID: \"1c9dbe72-a988-4a19-ae1b-b849c040a6c7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.118208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4v2p\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.125428 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.131093 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.142050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjcz\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-kube-api-access-dqjcz\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.142546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.143156 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.643143619 +0000 UTC m=+142.296422510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.160024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b48f63c-36d5-48ac-98c0-fe4313495425-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cr85f\" (UID: \"4b48f63c-36d5-48ac-98c0-fe4313495425\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.177165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68497d64-90d5-4346-aad5-abf525df6845-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-snd9g\" (UID: \"68497d64-90d5-4346-aad5-abf525df6845\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.194430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss5qk\" (UniqueName: \"kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk\") pod \"console-f9d7485db-tr7np\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.198558 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.215877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzw8\" (UniqueName: \"kubernetes.io/projected/735a4b10-b520-4e48-8cd0-fd47615af04b-kube-api-access-snzw8\") pod \"openshift-config-operator-7777fb866f-b9fln\" (UID: \"735a4b10-b520-4e48-8cd0-fd47615af04b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.221197 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.223571 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5jwvl"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.238688 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.240095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5r5z\" (UniqueName: \"kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z\") pod \"oauth-openshift-558db77b4-jx4dt\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.244330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.244469 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.744441997 +0000 UTC m=+142.397720888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.244655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.245152 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 21:40:09.745137977 +0000 UTC m=+142.398416878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.246467 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.253683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.256498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxrw\" (UniqueName: \"kubernetes.io/projected/289de29e-7a1c-4076-9aa4-b829a2f9b004-kube-api-access-lxxrw\") pod \"openshift-controller-manager-operator-756b6f6bc6-t8gt4\" (UID: \"289de29e-7a1c-4076-9aa4-b829a2f9b004\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.276148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba97d89e-7ec1-423e-b15a-a44253eac499-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8mwwl\" (UID: \"ba97d89e-7ec1-423e-b15a-a44253eac499\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.297340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdclc\" (UniqueName: \"kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc\") pod \"controller-manager-879f6c89f-nwvtk\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.302559 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.315974 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9rps\" (UniqueName: \"kubernetes.io/projected/ae258fd6-b8cc-4fe1-82f3-0717b513d66a-kube-api-access-f9rps\") pod \"authentication-operator-69f744f599-g67z5\" (UID: \"ae258fd6-b8cc-4fe1-82f3-0717b513d66a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.353163 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.353995 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.85398013 +0000 UTC m=+142.507259021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.364540 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sn4zb"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.373974 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2lf\" (UniqueName: \"kubernetes.io/projected/1f3f794e-3279-48fc-a684-e6d40fadd760-kube-api-access-7g2lf\") pod \"router-default-5444994796-2k2ct\" (UID: \"1f3f794e-3279-48fc-a684-e6d40fadd760\") " pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.376414 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhqh\" (UniqueName: \"kubernetes.io/projected/2b306c2d-5380-4048-aac2-26c834e948cc-kube-api-access-skhqh\") pod \"service-ca-operator-777779d784-wdk54\" (UID: \"2b306c2d-5380-4048-aac2-26c834e948cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.389592 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.392807 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w242t\" (UniqueName: \"kubernetes.io/projected/c554cead-1e24-4255-9682-6a0ddb6e54b6-kube-api-access-w242t\") pod \"package-server-manager-789f6589d5-mpskb\" (UID: \"c554cead-1e24-4255-9682-6a0ddb6e54b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.416909 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z2p7f\" (UniqueName: \"kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f\") pod \"collect-profiles-29521290-7nbqg\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.439201 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqwfk\" (UniqueName: \"kubernetes.io/projected/85aa40ba-6873-4c3d-9396-760b4597d183-kube-api-access-kqwfk\") pod \"migrator-59844c95c7-sshb4\" (UID: \"85aa40ba-6873-4c3d-9396-760b4597d183\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.441451 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.455730 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.456455 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.458963 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:09.958943872 +0000 UTC m=+142.612222843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.459902 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l4tz\" (UniqueName: \"kubernetes.io/projected/6603585f-6685-44a6-b3c8-1e938e10cbb4-kube-api-access-2l4tz\") pod \"service-ca-9c57cc56f-xcvfd\" (UID: \"6603585f-6685-44a6-b3c8-1e938e10cbb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.468970 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.491347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bwwq\" (UniqueName: \"kubernetes.io/projected/acfdd228-16ae-48f8-9737-c57e42024344-kube-api-access-6bwwq\") pod \"ingress-canary-z8w5w\" (UID: \"acfdd228-16ae-48f8-9737-c57e42024344\") " pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.495284 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tr7np"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.504203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r78qv\" (UniqueName: \"kubernetes.io/projected/5d2adadd-eb49-4e47-bd5d-30b77fbbe635-kube-api-access-r78qv\") pod \"packageserver-d55dfcdfc-6grsl\" (UID: \"5d2adadd-eb49-4e47-bd5d-30b77fbbe635\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.505564 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.516739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lhht\" (UniqueName: \"kubernetes.io/projected/5763ee94-31ba-43bf-8aaa-c943fa59c080-kube-api-access-8lhht\") pod \"machine-config-server-9zpgg\" (UID: \"5763ee94-31ba-43bf-8aaa-c943fa59c080\") " pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.513608 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" Feb 16 21:40:09 crc kubenswrapper[4792]: W0216 21:40:09.518957 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae243370_753c_48cb_b885_b4bf62dd55ef.slice/crio-20a6657a3e57b1a45009c81520001761880fd37d6b7fa5d1089235f17867d265 WatchSource:0}: Error finding container 20a6657a3e57b1a45009c81520001761880fd37d6b7fa5d1089235f17867d265: Status 404 returned error can't find the container with id 20a6657a3e57b1a45009c81520001761880fd37d6b7fa5d1089235f17867d265 Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.533936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jcn4\" (UniqueName: \"kubernetes.io/projected/fe6870e6-fb04-4e82-ac5a-f23d225cad7a-kube-api-access-2jcn4\") pod \"csi-hostpathplugin-v962t\" (UID: \"fe6870e6-fb04-4e82-ac5a-f23d225cad7a\") " pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.556176 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n55v6\" (UniqueName: \"kubernetes.io/projected/156ded60-abce-4ec4-912b-cbfece0f8d30-kube-api-access-n55v6\") pod \"olm-operator-6b444d44fb-rjrpc\" (UID: \"156ded60-abce-4ec4-912b-cbfece0f8d30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.559132 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.559690 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.059674894 +0000 UTC m=+142.712953785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.567167 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.576913 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.577159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tbz\" (UniqueName: \"kubernetes.io/projected/7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8-kube-api-access-r5tbz\") pod \"dns-default-hjb5c\" (UID: \"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8\") " pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.594153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqhv\" (UniqueName: \"kubernetes.io/projected/1350e708-602a-4919-9178-424fc36b043b-kube-api-access-bqqhv\") pod \"machine-config-operator-74547568cd-wdfb6\" (UID: \"1350e708-602a-4919-9178-424fc36b043b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.599811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.616427 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.621160 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.625207 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6n5b\" (UniqueName: \"kubernetes.io/projected/6e2d2b51-afe4-44d1-9c18-0bcef522d6dd-kube-api-access-p6n5b\") pod \"control-plane-machine-set-operator-78cbb6b69f-6btrx\" (UID: \"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.629577 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.649373 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.649914 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.650739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jc2r\" (UniqueName: \"kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r\") pod \"route-controller-manager-6576b87f9c-r7nkn\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.651847 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bnsxs"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.661127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.661439 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.161427255 +0000 UTC m=+142.814706146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.666253 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.672815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbrl\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-kube-api-access-8qbrl\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.682976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb6sm\" (UniqueName: \"kubernetes.io/projected/3e236ddc-88ad-474a-b7c2-ada364746f6d-kube-api-access-gb6sm\") pod \"catalog-operator-68c6474976-hlxg6\" (UID: \"3e236ddc-88ad-474a-b7c2-ada364746f6d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.686609 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.696856 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.701137 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tdp\" (UniqueName: \"kubernetes.io/projected/a78bbde7-7601-41dc-a9ef-a326cd6da349-kube-api-access-l5tdp\") pod \"console-operator-58897d9998-72gf6\" (UID: \"a78bbde7-7601-41dc-a9ef-a326cd6da349\") " pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.717834 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b32c7a47-9e78-4732-a919-4cb62dc13f06-bound-sa-token\") pod \"ingress-operator-5b745b69d9-97jgh\" (UID: \"b32c7a47-9e78-4732-a919-4cb62dc13f06\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.724923 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v962t" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.736324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvp8\" (UniqueName: \"kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8\") pod \"marketplace-operator-79b997595-ss6x2\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") " pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.736558 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zpgg" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.747944 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z8w5w" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.753391 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ncn6b"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.753791 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.754753 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" event={"ID":"ec33f265-8d79-4cf8-9565-ddc375565069","Type":"ContainerStarted","Data":"8f3921555e9f46f81ad89414c96f21c9b85c90f805fdfecabecd40bfa9e22df4"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.755962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" event={"ID":"8cb00c52-ac92-41bb-8b6a-08d31f4932cb","Type":"ContainerStarted","Data":"5d88721822dc106e05715448f405adfc5abb6ad073eeb70f960a3c693fba0b68"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.755984 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" event={"ID":"8cb00c52-ac92-41bb-8b6a-08d31f4932cb","Type":"ContainerStarted","Data":"bb89db2ba3e0d065e8ed58d20c3da391a9aa82c8bd31e48cb8909b4fa07cae89"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.757386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" event={"ID":"7f4cbae2-e549-4595-960c-8aacaca61776","Type":"ContainerStarted","Data":"a4ea6d2afdb3fc822156331c970b6618c497a5f645ff34c3be3086608ffbdb77"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.759476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gd457" event={"ID":"5e2db923-4a84-4a7d-8507-065f4920080d","Type":"ContainerStarted","Data":"ac036222719fc7806477bf1a194c073a1468482c5f15384037418f1cdf33b09d"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.759498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gd457" event={"ID":"5e2db923-4a84-4a7d-8507-065f4920080d","Type":"ContainerStarted","Data":"7a3bb96619a147ee4eef33502091738bcf05ae62679e1e368fb3ad54cc04bc5c"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.760145 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.760977 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwgwr\" (UniqueName: \"kubernetes.io/projected/75a747bf-419d-47c3-bd88-628deb937dc7-kube-api-access-mwgwr\") pod \"machine-config-controller-84d6567774-zppvn\" (UID: \"75a747bf-419d-47c3-bd88-628deb937dc7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.761458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc 
kubenswrapper[4792]: E0216 21:40:09.761788 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.261776897 +0000 UTC m=+142.915055788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.765914 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-gd457 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.765949 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gd457" podUID="5e2db923-4a84-4a7d-8507-065f4920080d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.766364 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tr7np" event={"ID":"ae243370-753c-48cb-b885-b4bf62dd55ef","Type":"ContainerStarted","Data":"20a6657a3e57b1a45009c81520001761880fd37d6b7fa5d1089235f17867d265"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.799135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" event={"ID":"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8","Type":"ContainerStarted","Data":"3253c614d66fc095cdd85c7691bc053f242d657559d1dc8f0078463e05b2b51d"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.799195 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" event={"ID":"5f3c5727-093c-443f-aac8-dd7f2e5ab7f8","Type":"ContainerStarted","Data":"3b0018e30e2ca2498d134fbb69f001c8fe26322d8b3f9dc109bf14949a6b702b"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.802870 4792 generic.go:334] "Generic (PLEG): container finished" podID="59a735fb-20bd-48e7-9c0c-f79fe28c6190" containerID="2fc1ffb545c90bff6b7fd9add9b21a755e256b02cef2d763ec2262ed7a4472fe" exitCode=0 Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.802934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" event={"ID":"59a735fb-20bd-48e7-9c0c-f79fe28c6190","Type":"ContainerDied","Data":"2fc1ffb545c90bff6b7fd9add9b21a755e256b02cef2d763ec2262ed7a4472fe"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.802968 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" event={"ID":"59a735fb-20bd-48e7-9c0c-f79fe28c6190","Type":"ContainerStarted","Data":"c889a93870026f9075ff4339db80f45b97bb1d92c96c47e2684a463f6286bc13"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.817826 4792 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" event={"ID":"e0712775-7995-4058-9326-15ae6f90a6fe","Type":"ContainerStarted","Data":"0b6a0246a76d4573cd15137f8f1202ce88102d029cf799afce8833c51c31e9d0"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.817907 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" event={"ID":"e0712775-7995-4058-9326-15ae6f90a6fe","Type":"ContainerStarted","Data":"e92ceb696709802b38e91bc71b09736aa3eed6d15edc7370637b13f18eeb102a"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.836897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" event={"ID":"b3f992b5-86f2-4dff-b132-b7b22e6e9629","Type":"ContainerStarted","Data":"2f451d19fd8904bd614725b6e9df199785917394bb00253e0de4419a51b2faea"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.836937 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" event={"ID":"b3f992b5-86f2-4dff-b132-b7b22e6e9629","Type":"ContainerStarted","Data":"36d662c71446d5a4941be69a0fa5fb454d25f83355acfd4b74262ce041becdb9"} Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.854006 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.868270 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.869412 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.869973 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.369961791 +0000 UTC m=+143.023240672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.884633 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.891557 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.907094 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.937316 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.950059 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g"] Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.960934 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.970386 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:09 crc kubenswrapper[4792]: E0216 21:40:09.970783 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.470761424 +0000 UTC m=+143.124040315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:09 crc kubenswrapper[4792]: I0216 21:40:09.976959 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.071387 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.071692 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.571680522 +0000 UTC m=+143.224959413 (durationBeforeRetry 500ms). 
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.121837 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"]
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.172713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.173103 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.673087672 +0000 UTC m=+143.326366563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.204246 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b9fln"]
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.226799 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl"]
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.278968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.279463 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.779447335 +0000 UTC m=+143.432726226 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.290400 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.297135 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.362094 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g67z5"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.380035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.380411 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.880391874 +0000 UTC m=+143.533670765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.420803 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb35cffd_4266_41df_89cc_d136fd0f6954.slice/crio-3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70 WatchSource:0}: Error finding container 3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70: Status 404 returned error can't find the container with id 3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.426201 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4"]
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.432638 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdk54"]
Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.438552 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba97d89e_7ec1_423e_b15a_a44253eac499.slice/crio-060f13dbb32ebcc80155c0613bc7cca9281a13568e4d95e6edba8def2c86e16f WatchSource:0}: Error finding container 060f13dbb32ebcc80155c0613bc7cca9281a13568e4d95e6edba8def2c86e16f: Status 404 returned error can't find the container with id 060f13dbb32ebcc80155c0613bc7cca9281a13568e4d95e6edba8def2c86e16f
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.481894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.482228 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:10.982214896 +0000 UTC m=+143.635493787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
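
The manager.go:1169 warnings come from the kubelet's embedded cAdvisor: a cgroup watch fires for a freshly created /kubepods.slice/.../crio-<id> group before the runtime can resolve that container, the lookup returns 404, and the event is dropped as a benign startup race. A rough model under that assumption (hypothetical names, not cAdvisor's code):

```go
// A cgroup watch can fire for /kubepods.slice/.../crio-<id> before the
// runtime knows that container, so the handler treats "not found" as a
// skippable race rather than a fatal error.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("can't find the container with id")

type watchEvent struct {
	EventType int
	Name      string // cgroup path, e.g. ".../crio-3cca53dd..."
}

// findContainer stands in for a runtime lookup that returns 404 while the
// container is still being created or was just torn down.
func findContainer(id string) error {
	return fmt.Errorf("Status 404 returned error %w %s", errNotFound, id)
}

func processWatchEvent(ev watchEvent) {
	if err := findContainer(ev.Name); err != nil {
		// Log and drop the event; a later event or a periodic
		// housekeeping pass picks the container up.
		fmt.Printf("Failed to process watch event %+v: %v\n", ev, err)
		return
	}
}

func main() {
	processWatchEvent(watchEvent{Name: "crio-3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70"})
}
```
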
Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.549040 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod289de29e_7a1c_4076_9aa4_b829a2f9b004.slice/crio-bfe03efae23435aea46bed1b4d331343c4e5e011b337a8765df8796676dae8fc WatchSource:0}: Error finding container bfe03efae23435aea46bed1b4d331343c4e5e011b337a8765df8796676dae8fc: Status 404 returned error can't find the container with id bfe03efae23435aea46bed1b4d331343c4e5e011b337a8765df8796676dae8fc
Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.559895 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b306c2d_5380_4048_aac2_26c834e948cc.slice/crio-433602ad85bcef99a48de3ac8dc03710c8fe0ecfd42f0f67bb323501b93cc9ca WatchSource:0}: Error finding container 433602ad85bcef99a48de3ac8dc03710c8fe0ecfd42f0f67bb323501b93cc9ca: Status 404 returned error can't find the container with id 433602ad85bcef99a48de3ac8dc03710c8fe0ecfd42f0f67bb323501b93cc9ca
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.582789 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.582936 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.082915367 +0000 UTC m=+143.736194258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.583073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.583362 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-02-16 21:40:11.083343939 +0000 UTC m=+143.736622830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.583417 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb"] Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.629335 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5763ee94_31ba_43bf_8aaa_c943fa59c080.slice/crio-da3caf952ac8434acf7c0d0afd650d9e23fa316431d7cefcf0921f3c703238ea WatchSource:0}: Error finding container da3caf952ac8434acf7c0d0afd650d9e23fa316431d7cefcf0921f3c703238ea: Status 404 returned error can't find the container with id da3caf952ac8434acf7c0d0afd650d9e23fa316431d7cefcf0921f3c703238ea Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.669301 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z8w5w"] Feb 16 21:40:10 crc kubenswrapper[4792]: W0216 21:40:10.674210 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc554cead_1e24_4255_9682_6a0ddb6e54b6.slice/crio-44eea4b2bc204b1d789073ebd1824bf57454a2e17efb10f29aa5f2517a0fa2db WatchSource:0}: Error finding container 44eea4b2bc204b1d789073ebd1824bf57454a2e17efb10f29aa5f2517a0fa2db: Status 404 returned error can't find the container with id 44eea4b2bc204b1d789073ebd1824bf57454a2e17efb10f29aa5f2517a0fa2db Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.686459 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.688142 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.688696 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xcvfd"] Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.688726 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.188707573 +0000 UTC m=+143.841986464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.744849 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.756855 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-72gf6"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.773478 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.777847 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.785756 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.799067 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gd457" podStartSLOduration=118.79905179 podStartE2EDuration="1m58.79905179s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:10.798269757 +0000 UTC m=+143.451548648" watchObservedRunningTime="2026-02-16 21:40:10.79905179 +0000 UTC m=+143.452330681" Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.799863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.800166 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.30015395 +0000 UTC m=+143.953432841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.845162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" event={"ID":"ae258fd6-b8cc-4fe1-82f3-0717b513d66a","Type":"ContainerStarted","Data":"d740e79cac7123b9616ae52e4990c313fbee4d4f801d4e94c452c1ace29e7271"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.855485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" event={"ID":"a1a69fa0-202e-42db-905c-8cc07f3ffa24","Type":"ContainerStarted","Data":"fd6ab9630659a04a31fd933a91722441f6d7445f2df355381b9a996b6d3afa02"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.860640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zpgg" event={"ID":"5763ee94-31ba-43bf-8aaa-c943fa59c080","Type":"ContainerStarted","Data":"da3caf952ac8434acf7c0d0afd650d9e23fa316431d7cefcf0921f3c703238ea"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.876149 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z8w5w" event={"ID":"acfdd228-16ae-48f8-9737-c57e42024344","Type":"ContainerStarted","Data":"1c03ddef62112d89c900e043c4ad57b12e2530ece6ceb29fa9aaea7d1688bf19"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.888820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" event={"ID":"74c00cd5-2613-4930-9091-9061ea9496bf","Type":"ContainerStarted","Data":"2da85a859e3e94895d90d7b5acd75291707c11b0893e35024b78dba4c827835a"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.895053 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" event={"ID":"ba97d89e-7ec1-423e-b15a-a44253eac499","Type":"ContainerStarted","Data":"060f13dbb32ebcc80155c0613bc7cca9281a13568e4d95e6edba8def2c86e16f"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.901848 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:10 crc kubenswrapper[4792]: E0216 21:40:10.902488 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.402465737 +0000 UTC m=+144.055744628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.910383 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.918706 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec33f265-8d79-4cf8-9565-ddc375565069" containerID="0323b6672a2e3d7c1ff2d20e03df1d591debe19236eab8e4e39a34b71671072a" exitCode=0 Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.918754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" event={"ID":"ec33f265-8d79-4cf8-9565-ddc375565069","Type":"ContainerDied","Data":"0323b6672a2e3d7c1ff2d20e03df1d591debe19236eab8e4e39a34b71671072a"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.936847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" event={"ID":"1fd5e410-68ff-42f7-a7fb-f138c0eff419","Type":"ContainerStarted","Data":"9759bc67eef74e6bf7c112c4a45739e8317de825040ac872ab1a8bb61b70eb10"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.963238 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" event={"ID":"4b48f63c-36d5-48ac-98c0-fe4313495425","Type":"ContainerStarted","Data":"1b1eb5b67771ab1f8b3e4785c62d89d73f77c2802ebc1b8ae0c950036b672240"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.965680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" event={"ID":"289de29e-7a1c-4076-9aa4-b829a2f9b004","Type":"ContainerStarted","Data":"bfe03efae23435aea46bed1b4d331343c4e5e011b337a8765df8796676dae8fc"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.975082 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.976186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" event={"ID":"68497d64-90d5-4346-aad5-abf525df6845","Type":"ContainerStarted","Data":"55f305c7d405da6cd015695e8cd83909eb2ec375b3023fb2fe25cdd0e570a5fe"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.978836 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hjb5c"] Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.979412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" event={"ID":"7f4cbae2-e549-4595-960c-8aacaca61776","Type":"ContainerStarted","Data":"301e73e1b74336c76fdc1ff3847d1df312cbf73ce4f71b2965b8143b2a93e1e6"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.994862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" 
event={"ID":"eb35cffd-4266-41df-89cc-d136fd0f6954","Type":"ContainerStarted","Data":"3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70"} Feb 16 21:40:10 crc kubenswrapper[4792]: I0216 21:40:10.998203 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" event={"ID":"e0712775-7995-4058-9326-15ae6f90a6fe","Type":"ContainerStarted","Data":"efaa59bbcd4f9b3c313bad0c345942b051284445acbd0288ff3c004de884c588"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.003134 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.005872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" event={"ID":"6603585f-6685-44a6-b3c8-1e938e10cbb4","Type":"ContainerStarted","Data":"53855085c68c8affdc73b1813cf8de8f496d3f092ba221662e3141372019e54b"} Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.006078 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.506068201 +0000 UTC m=+144.159347092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.008748 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" event={"ID":"2b306c2d-5380-4048-aac2-26c834e948cc","Type":"ContainerStarted","Data":"433602ad85bcef99a48de3ac8dc03710c8fe0ecfd42f0f67bb323501b93cc9ca"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.010957 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4f6mr" podStartSLOduration=119.010915109 podStartE2EDuration="1m59.010915109s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:10.992294629 +0000 UTC m=+143.645573510" watchObservedRunningTime="2026-02-16 21:40:11.010915109 +0000 UTC m=+143.664194000" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.015295 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" event={"ID":"1c9dbe72-a988-4a19-ae1b-b849c040a6c7","Type":"ContainerStarted","Data":"7aebdd23c042a8f3a6f59351a42b44af19242e231cbf5f702aaf8de2d8045e0a"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.015327 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" event={"ID":"1c9dbe72-a988-4a19-ae1b-b849c040a6c7","Type":"ContainerStarted","Data":"c8cb3f647a928d9be49abc308a0b1d331c6d8d23c343b6f7614a3d7ca071bf8e"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.016715 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" event={"ID":"85aa40ba-6873-4c3d-9396-760b4597d183","Type":"ContainerStarted","Data":"2642c2c707a7af53cbd258d720a9582c2e146f31684a53be1ebc0ea1ac7899e9"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.020634 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" event={"ID":"14e13832-467f-4f02-9ded-be8ca6bc6ed2","Type":"ContainerStarted","Data":"ff8114c9556dd47225283c7b980c86870abb49d5ec0c7c6bc96d4091f343c60d"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.020680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" event={"ID":"14e13832-467f-4f02-9ded-be8ca6bc6ed2","Type":"ContainerStarted","Data":"375d3f3a483fafb9389dee1e8c456794eee4f388519dd00f41510bb5923d108b"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.034750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" event={"ID":"b3f992b5-86f2-4dff-b132-b7b22e6e9629","Type":"ContainerStarted","Data":"cc04411b72d0cd5fe8e0d92631464989eb7ce415cb64d147a65f54a704d00351"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.043736 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx"] Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.044372 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" event={"ID":"735a4b10-b520-4e48-8cd0-fd47615af04b","Type":"ContainerStarted","Data":"e38ef37e69d9cc453566f2d4e616a4ca8be1397850db60647d419b6686184273"} Feb 16 21:40:11 crc kubenswrapper[4792]: W0216 21:40:11.047522 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e236ddc_88ad_474a_b7c2_ada364746f6d.slice/crio-ebbae3b2183e97c0c894114b255932f60bdfc5786707b4416d5110a3d3c9f890 WatchSource:0}: Error finding container ebbae3b2183e97c0c894114b255932f60bdfc5786707b4416d5110a3d3c9f890: Status 404 returned error can't find the container with id ebbae3b2183e97c0c894114b255932f60bdfc5786707b4416d5110a3d3c9f890 Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.056736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" event={"ID":"c554cead-1e24-4255-9682-6a0ddb6e54b6","Type":"ContainerStarted","Data":"44eea4b2bc204b1d789073ebd1824bf57454a2e17efb10f29aa5f2517a0fa2db"} Feb 16 21:40:11 crc kubenswrapper[4792]: W0216 21:40:11.064852 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a747bf_419d_47c3_bd88_628deb937dc7.slice/crio-3d2000b77668ce7906ab60fce72687016e8b0e062c926c9332f30102512effe1 WatchSource:0}: Error finding container 3d2000b77668ce7906ab60fce72687016e8b0e062c926c9332f30102512effe1: Status 404 returned error can't find the container with id 
3d2000b77668ce7906ab60fce72687016e8b0e062c926c9332f30102512effe1 Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.069433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2k2ct" event={"ID":"1f3f794e-3279-48fc-a684-e6d40fadd760","Type":"ContainerStarted","Data":"7bd8db99b472bc6ebf0bd5d190b69c962549d0f7b837fcd8c0bba3a170cdfeeb"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.069494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2k2ct" event={"ID":"1f3f794e-3279-48fc-a684-e6d40fadd760","Type":"ContainerStarted","Data":"2de434204e83b9702bdc7615edbc25178c9042622a5def9da04e3ed63bd8d436"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.076099 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6"] Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.080084 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t4mfn" podStartSLOduration=119.080067981 podStartE2EDuration="1m59.080067981s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.069266623 +0000 UTC m=+143.722545514" watchObservedRunningTime="2026-02-16 21:40:11.080067981 +0000 UTC m=+143.733346872" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.080736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" event={"ID":"5d2adadd-eb49-4e47-bd5d-30b77fbbe635","Type":"ContainerStarted","Data":"c3a36eb1987a42ea15b675843de8c102cd2941f041a137d8c551fceeeb41ea55"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.083997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tr7np" event={"ID":"ae243370-753c-48cb-b885-b4bf62dd55ef","Type":"ContainerStarted","Data":"5be3df284be45201565d60b10dd1695a50b44f354cb8f327798cb7ea7946fdd8"} Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.086093 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-gd457 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.086151 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gd457" podUID="5e2db923-4a84-4a7d-8507-065f4920080d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.104143 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.106361 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.60634382 +0000 UTC m=+144.259622711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.117222 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh"] Feb 16 21:40:11 crc kubenswrapper[4792]: W0216 21:40:11.141047 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7122eb67_c55a_4ec5_a27e_c7a3dc24c0d8.slice/crio-d742d1bd9bc1ab0923431e0de2844a27a2537d157c36519d1c7656f60ff9d0ca WatchSource:0}: Error finding container d742d1bd9bc1ab0923431e0de2844a27a2537d157c36519d1c7656f60ff9d0ca: Status 404 returned error can't find the container with id d742d1bd9bc1ab0923431e0de2844a27a2537d157c36519d1c7656f60ff9d0ca Feb 16 21:40:11 crc kubenswrapper[4792]: W0216 21:40:11.194566 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1350e708_602a_4919_9178_424fc36b043b.slice/crio-13f3e7bfca2440f5c0d1bb36c8d5ff44c335bfd6e4eb5006714a3f353c831c53 WatchSource:0}: Error finding container 13f3e7bfca2440f5c0d1bb36c8d5ff44c335bfd6e4eb5006714a3f353c831c53: Status 404 returned error can't find the container with id 13f3e7bfca2440f5c0d1bb36c8d5ff44c335bfd6e4eb5006714a3f353c831c53 Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.207288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.208901 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.708887313 +0000 UTC m=+144.362166194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.214923 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v962t"] Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.307909 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.308120 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.808086331 +0000 UTC m=+144.461365222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.308453 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.308778 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.808767651 +0000 UTC m=+144.462046542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.401481 4792 csr.go:261] certificate signing request csr-4wmv8 is approved, waiting to be issued
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.408035 4792 csr.go:257] certificate signing request csr-4wmv8 is issued
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.409745 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.410143 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:11.910126921 +0000 UTC m=+144.563405812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.516092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.516581 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.016569865 +0000 UTC m=+144.669848756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
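
The two csr.go lines above trace the kubelet's client-certificate rotation: csr-4wmv8 first carries an approval with no signed certificate ("approved, waiting to be issued"), then its status gains the certificate ("is issued"). A self-contained sketch of those two states, using hypothetical stand-ins rather than the real CertificateSigningRequest API types:

```go
// Hypothetical model of the two csr.go states in the log: approval is a
// condition on the request; issuance is the signer filling in the
// certificate afterwards.
package main

import "fmt"

type csrCondition string

const approved csrCondition = "Approved"

type certificateSigningRequest struct {
	Name        string
	Conditions  []csrCondition
	Certificate []byte // filled in by the signer after approval
}

func stateOf(csr certificateSigningRequest) string {
	isApproved := false
	for _, c := range csr.Conditions {
		if c == approved {
			isApproved = true
		}
	}
	switch {
	case isApproved && len(csr.Certificate) > 0:
		return fmt.Sprintf("certificate signing request %s is issued", csr.Name)
	case isApproved:
		return fmt.Sprintf("certificate signing request %s is approved, waiting to be issued", csr.Name)
	default:
		return fmt.Sprintf("certificate signing request %s is pending approval", csr.Name)
	}
}

func main() {
	csr := certificateSigningRequest{Name: "csr-4wmv8", Conditions: []csrCondition{approved}}
	fmt.Println(stateOf(csr)) // approved, waiting to be issued
	csr.Certificate = []byte("-----BEGIN CERTIFICATE-----...")
	fmt.Println(stateOf(csr)) // issued
}
```
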
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.577536 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2k2ct"
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.578869 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.578904 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.619124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.619243 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.119228272 +0000 UTC m=+144.772507153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.619348 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.619656 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.119649705 +0000 UTC m=+144.772928596 (durationBeforeRetry 500ms).
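
The router's startup probe above fails with "connect: connection refused" simply because nothing is listening on localhost:1936 yet; the kubelet marks the probe unhealthy and tries again on its configured period. A minimal check in the same spirit (assumed endpoint and timeout; not the kubelet's prober itself):

```go
// One HTTP probe attempt: while the target socket is not yet bound, the
// GET fails with "connect: connection refused", which is exactly the
// failure output recorded in the log during container startup.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp [::1]:1936: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://localhost:1936/healthz/ready"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("ready")
	}
}
```
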
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.719451 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ddgfq" podStartSLOduration=119.719429439 podStartE2EDuration="1m59.719429439s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.719397068 +0000 UTC m=+144.372675959" watchObservedRunningTime="2026-02-16 21:40:11.719429439 +0000 UTC m=+144.372708340" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.719985 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.720325 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.220307874 +0000 UTC m=+144.873586765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.799648 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sn4zb" podStartSLOduration=119.799625315 podStartE2EDuration="1m59.799625315s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.795909549 +0000 UTC m=+144.449188430" watchObservedRunningTime="2026-02-16 21:40:11.799625315 +0000 UTC m=+144.452904206" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.825533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.825915 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.325900515 +0000 UTC m=+144.979179406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.843129 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" podStartSLOduration=119.843112825 podStartE2EDuration="1m59.843112825s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.84219906 +0000 UTC m=+144.495477941" watchObservedRunningTime="2026-02-16 21:40:11.843112825 +0000 UTC m=+144.496391716" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.877569 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6kvt2" podStartSLOduration=119.877551957 podStartE2EDuration="1m59.877551957s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.876108236 +0000 UTC m=+144.529387137" watchObservedRunningTime="2026-02-16 21:40:11.877551957 +0000 UTC m=+144.530830858" Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.926828 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.926982 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.426961536 +0000 UTC m=+145.080240437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.927367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:11 crc kubenswrapper[4792]: E0216 21:40:11.932953 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.432933316 +0000 UTC m=+145.086212207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:11 crc kubenswrapper[4792]: I0216 21:40:11.963630 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tr7np" podStartSLOduration=119.96361421 podStartE2EDuration="1m59.96361421s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.920828051 +0000 UTC m=+144.574106952" watchObservedRunningTime="2026-02-16 21:40:11.96361421 +0000 UTC m=+144.616893101" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.033102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.033431 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.533417151 +0000 UTC m=+145.186696042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.069543 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2k2ct" podStartSLOduration=120.06952578 podStartE2EDuration="2m0.06952578s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.965759082 +0000 UTC m=+144.619037973" watchObservedRunningTime="2026-02-16 21:40:12.06952578 +0000 UTC m=+144.722804671" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.104101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz" event={"ID":"59a735fb-20bd-48e7-9c0c-f79fe28c6190","Type":"ContainerStarted","Data":"67d8264594960055c5a74fe8637eddb7df7e715f553b1cfd2dbd6597f6437934"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.121217 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" event={"ID":"85aa40ba-6873-4c3d-9396-760b4597d183","Type":"ContainerStarted","Data":"fea6c032850837cff66ed3ee1c2f53d385ea5b25e7e8872839ded1c5e2de9e8f"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.134648 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.135738 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.635726427 +0000 UTC m=+145.289005308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.163545 4792 generic.go:334] "Generic (PLEG): container finished" podID="735a4b10-b520-4e48-8cd0-fd47615af04b" containerID="0af0e55c17626f34c5830516b26b544020ab712dcbe3065b0d0225ce09b8d706" exitCode=0 Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.163669 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" event={"ID":"735a4b10-b520-4e48-8cd0-fd47615af04b","Type":"ContainerDied","Data":"0af0e55c17626f34c5830516b26b544020ab712dcbe3065b0d0225ce09b8d706"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.180382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z8w5w" event={"ID":"acfdd228-16ae-48f8-9737-c57e42024344","Type":"ContainerStarted","Data":"ca7577b5462d96d9027530c9cad879a4521ff898f85dd22232186e29993936f2"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.187408 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7tzmh" podStartSLOduration=120.187393561 podStartE2EDuration="2m0.187393561s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.072121204 +0000 UTC m=+144.725400095" watchObservedRunningTime="2026-02-16 21:40:12.187393561 +0000 UTC m=+144.840672452" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.232444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" event={"ID":"a1a69fa0-202e-42db-905c-8cc07f3ffa24","Type":"ContainerStarted","Data":"1de1e4f7aac6c580be3803a3a04144ec6795cac2d071521f66e7d9ad89fd9ec2"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.232488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" event={"ID":"a1a69fa0-202e-42db-905c-8cc07f3ffa24","Type":"ContainerStarted","Data":"545b8b8429dd84914e0d921775a20f8675936e205739bd25529a8d420ab9c2df"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.236082 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.237235 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.737193831 +0000 UTC m=+145.390472752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.251520 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z8w5w" podStartSLOduration=6.251503239 podStartE2EDuration="6.251503239s" podCreationTimestamp="2026-02-16 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.213542276 +0000 UTC m=+144.866821167" watchObservedRunningTime="2026-02-16 21:40:12.251503239 +0000 UTC m=+144.904782130" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.259682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" event={"ID":"3e236ddc-88ad-474a-b7c2-ada364746f6d","Type":"ContainerStarted","Data":"bd3cd51b72a803705364a261929efd06108fb433b4a5381a673caf841f50635c"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.259745 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.259759 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" event={"ID":"3e236ddc-88ad-474a-b7c2-ada364746f6d","Type":"ContainerStarted","Data":"ebbae3b2183e97c0c894114b255932f60bdfc5786707b4416d5110a3d3c9f890"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.261856 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hlxg6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.261898 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" podUID="3e236ddc-88ad-474a-b7c2-ada364746f6d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.268086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-72gf6" event={"ID":"a78bbde7-7601-41dc-a9ef-a326cd6da349","Type":"ContainerStarted","Data":"43fc8a264af2243e53af88e021dec20f2dceb79a4374a3c4ddd11495027ab34c"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.268137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-72gf6" event={"ID":"a78bbde7-7601-41dc-a9ef-a326cd6da349","Type":"ContainerStarted","Data":"30b428c81da04b1ee50492d01a4f6d1ff9a6c3bfc331088a1358be21ae968638"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.269042 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-72gf6" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.269998 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-72gf6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.270028 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-72gf6" podUID="a78bbde7-7601-41dc-a9ef-a326cd6da349" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.284475 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6" podStartSLOduration=120.284461758 podStartE2EDuration="2m0.284461758s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.283953824 +0000 UTC m=+144.937232715" watchObservedRunningTime="2026-02-16 21:40:12.284461758 +0000 UTC m=+144.937740649" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.286479 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhqxb" podStartSLOduration=120.286469025 podStartE2EDuration="2m0.286469025s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.253243838 +0000 UTC m=+144.906522719" watchObservedRunningTime="2026-02-16 21:40:12.286469025 +0000 UTC m=+144.939747906" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.304720 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" event={"ID":"ba97d89e-7ec1-423e-b15a-a44253eac499","Type":"ContainerStarted","Data":"a54bc3d787343221558a3db41b0639134ebde9d83f99484ef7aff4599c32acf0"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.317798 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" event={"ID":"4b48f63c-36d5-48ac-98c0-fe4313495425","Type":"ContainerStarted","Data":"e5467b23c053688447b4c660913f6788324a76a6810fea88c9c40f738f4f5f97"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.329203 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hjb5c" event={"ID":"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8","Type":"ContainerStarted","Data":"d742d1bd9bc1ab0923431e0de2844a27a2537d157c36519d1c7656f60ff9d0ca"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.335704 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-72gf6" podStartSLOduration=120.320045143 podStartE2EDuration="2m0.320045143s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.313023233 +0000 UTC 
m=+144.966302134" watchObservedRunningTime="2026-02-16 21:40:12.320045143 +0000 UTC m=+144.973324024" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.337886 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.338220 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.83820605 +0000 UTC m=+145.491484941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.346990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" event={"ID":"74c00cd5-2613-4930-9091-9061ea9496bf","Type":"ContainerStarted","Data":"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.347423 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.349693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" event={"ID":"ae258fd6-b8cc-4fe1-82f3-0717b513d66a","Type":"ContainerStarted","Data":"4ec9ad78a52a85ae89324cf28e58b0c611d2d29045d490b973964bae51c83446"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.357722 4792 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nwvtk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.357771 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.358144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8mwwl" podStartSLOduration=120.358109938 podStartE2EDuration="2m0.358109938s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.347797214 +0000 UTC m=+145.001076105" 
watchObservedRunningTime="2026-02-16 21:40:12.358109938 +0000 UTC m=+145.011388829" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.358579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" event={"ID":"eb35cffd-4266-41df-89cc-d136fd0f6954","Type":"ContainerStarted","Data":"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.359323 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.367131 4792 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jx4dt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.367201 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.372955 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" podStartSLOduration=120.372934801 podStartE2EDuration="2m0.372934801s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.372100597 +0000 UTC m=+145.025379488" watchObservedRunningTime="2026-02-16 21:40:12.372934801 +0000 UTC m=+145.026213692" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.373085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" event={"ID":"1350e708-602a-4919-9178-424fc36b043b","Type":"ContainerStarted","Data":"13f3e7bfca2440f5c0d1bb36c8d5ff44c335bfd6e4eb5006714a3f353c831c53"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.385157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" event={"ID":"289de29e-7a1c-4076-9aa4-b829a2f9b004","Type":"ContainerStarted","Data":"dab090c04b1d3a73b1541d7de8c68a1aec79d55c0186701d8e348f979aecf407"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.386855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zpgg" event={"ID":"5763ee94-31ba-43bf-8aaa-c943fa59c080","Type":"ContainerStarted","Data":"2b6bf284667cd5bc64712ef994e7e708650a3ff6145ba5383adb018d756451a9"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.395673 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" event={"ID":"156ded60-abce-4ec4-912b-cbfece0f8d30","Type":"ContainerStarted","Data":"1e9a1eb57e33b1ff69a5bef01a249b7a5438f4489d64ad74977383652c7b0275"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.404156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" 
event={"ID":"86214154-257c-46e0-8f95-8a16bd86f9ec","Type":"ContainerStarted","Data":"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.404220 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.404235 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" event={"ID":"86214154-257c-46e0-8f95-8a16bd86f9ec","Type":"ContainerStarted","Data":"5c7181453180429c40b6b468d9d7a719ce6fbd3cd941593af41254c66a887a0b"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.405233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" event={"ID":"2cb51e3c-4f03-4e68-91fe-838816d8a376","Type":"ContainerStarted","Data":"46c80a6411ec1c3f6c9bf13dca4c1938b3e52329f7e831ff15fdbd080372f05f"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.406845 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" event={"ID":"b32c7a47-9e78-4732-a919-4cb62dc13f06","Type":"ContainerStarted","Data":"f83fb87462fd5ef164d162011b56d010308b121c58dc7e6d1bef06718c65efea"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.408003 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" event={"ID":"2b306c2d-5380-4048-aac2-26c834e948cc","Type":"ContainerStarted","Data":"acd6e71dc17b16fefcc73457b376fcb85ce27b05b240c2d52cec4cc27ac9ad9f"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.408617 4792 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r7nkn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.408670 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.416434 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 21:35:11 +0000 UTC, rotation deadline is 2027-01-03 19:45:48.367111246 +0000 UTC Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.416462 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7702h5m35.950651784s for next certificate rotation Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.433523 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" podStartSLOduration=120.433509998 podStartE2EDuration="2m0.433509998s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.401073493 +0000 UTC m=+145.054352384" watchObservedRunningTime="2026-02-16 
Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.433523 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cr85f" podStartSLOduration=120.433509998 podStartE2EDuration="2m0.433509998s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.401073493 +0000 UTC m=+145.054352384" watchObservedRunningTime="2026-02-16 21:40:12.433509998 +0000 UTC m=+145.086788889" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.433871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" event={"ID":"6603585f-6685-44a6-b3c8-1e938e10cbb4","Type":"ContainerStarted","Data":"4c2a08404ddd2a5c9e8536f85c182c01863515cbe59ff24cc5ae037976404472"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.434777 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-g67z5" podStartSLOduration=120.434773544 podStartE2EDuration="2m0.434773544s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.433326032 +0000 UTC m=+145.086604923" watchObservedRunningTime="2026-02-16 21:40:12.434773544 +0000 UTC m=+145.088052435" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.437725 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v962t" event={"ID":"fe6870e6-fb04-4e82-ac5a-f23d225cad7a","Type":"ContainerStarted","Data":"6fa621959ee64e0adbe4ebb3446ec1be504ce920532ee9371e4d27d3cc8173c4"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.440733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" event={"ID":"75a747bf-419d-47c3-bd88-628deb937dc7","Type":"ContainerStarted","Data":"3d2000b77668ce7906ab60fce72687016e8b0e062c926c9332f30102512effe1"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.441145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.443159 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:12.943139662 +0000 UTC m=+145.596418543 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.483506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" event={"ID":"c554cead-1e24-4255-9682-6a0ddb6e54b6","Type":"ContainerStarted","Data":"e5eda7e310ee42a562ad0cd0307c56a8080631867543e6337375877b95aa528c"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.485962 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.485985 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" event={"ID":"5d2adadd-eb49-4e47-bd5d-30b77fbbe635","Type":"ContainerStarted","Data":"afe2a8c3eb89306e7ffcb946c96f153c6c4bb64589fb12fe2d466a86bc544c94"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.487872 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6grsl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.487933 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" podUID="5d2adadd-eb49-4e47-bd5d-30b77fbbe635" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.493823 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" podStartSLOduration=120.493613801 podStartE2EDuration="2m0.493613801s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.486942841 +0000 UTC m=+145.140221742" watchObservedRunningTime="2026-02-16 21:40:12.493613801 +0000 UTC m=+145.146892692" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.496867 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" event={"ID":"14e13832-467f-4f02-9ded-be8ca6bc6ed2","Type":"ContainerStarted","Data":"9cc26e62895261610b83e7bbae46155020fce84e4dabb8523d4696cd4b10fa13"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.520948 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9zpgg" podStartSLOduration=6.52093442 podStartE2EDuration="6.52093442s" podCreationTimestamp="2026-02-16 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.520031955 +0000 UTC m=+145.173310856" 
watchObservedRunningTime="2026-02-16 21:40:12.52093442 +0000 UTC m=+145.174213311" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.543002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.545405 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t8gt4" podStartSLOduration=120.542127654 podStartE2EDuration="2m0.542127654s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.541777254 +0000 UTC m=+145.195056145" watchObservedRunningTime="2026-02-16 21:40:12.542127654 +0000 UTC m=+145.195406545" Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.546710 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.046696945 +0000 UTC m=+145.699975836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.549521 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" event={"ID":"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd","Type":"ContainerStarted","Data":"65a62bf4a83b14b715a20464c141e4270c197c56978894457a161559c2551a53"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.553893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" event={"ID":"18d326ed-a5e0-4663-bec0-8ee429a44c89","Type":"ContainerStarted","Data":"a770c4b144895b8baf5eb5ab279e0cc61bd2fce83e4f309c96959409f9085944"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.566622 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" event={"ID":"68497d64-90d5-4346-aad5-abf525df6845","Type":"ContainerStarted","Data":"41f9f0a738636422c37cd9b5cfeab5c3246a804f019d0f737995e89e8ac2f7c0"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.567642 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ncn6b" podStartSLOduration=120.567623371 podStartE2EDuration="2m0.567623371s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.567415516 +0000 UTC m=+145.220694407" 
watchObservedRunningTime="2026-02-16 21:40:12.567623371 +0000 UTC m=+145.220902262" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.587624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" event={"ID":"1fd5e410-68ff-42f7-a7fb-f138c0eff419","Type":"ContainerStarted","Data":"194e13a8dd55a121327d6bb71d8fa2d7f41f77829e4f592995e9587069f8c223"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.587665 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" event={"ID":"1fd5e410-68ff-42f7-a7fb-f138c0eff419","Type":"ContainerStarted","Data":"ad45a3ae3ae105ffe3a860a445e68fd7486b66feb3c19383900e5c8eff25b517"} Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.588506 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-gd457 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.591685 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gd457" podUID="5e2db923-4a84-4a7d-8507-065f4920080d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.596996 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:40:12 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Feb 16 21:40:12 crc kubenswrapper[4792]: [+]process-running ok Feb 16 21:40:12 crc kubenswrapper[4792]: healthz check failed Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.597045 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.605930 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-xcvfd" podStartSLOduration=120.605911063 podStartE2EDuration="2m0.605911063s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.601486427 +0000 UTC m=+145.254765328" watchObservedRunningTime="2026-02-16 21:40:12.605911063 +0000 UTC m=+145.259189954" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.628852 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdk54" podStartSLOduration=120.628835846 podStartE2EDuration="2m0.628835846s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.628168067 +0000 UTC m=+145.281446958" watchObservedRunningTime="2026-02-16 21:40:12.628835846 +0000 UTC m=+145.282114727" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.648503 4792 
Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.648503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.648725 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.148700943 +0000 UTC m=+145.801979834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.648803 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
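Every one of these mount/unmount failures ends with "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the failed operation is not retried in a tight loop but parked behind a backoff gate keyed to the volume and pod, and the volume reconciler's frequent sync passes simply log a fresh attempt and get rejected while the gate is closed. A toy version of that gate (the real kubelet grows the delay exponentially up to a cap; every attempt in this capture is still at the initial 500ms, and the doubling below is an assumption for illustration):

```go
package main

import (
	"fmt"
	"time"
)

// retryGate is a toy version of the backoff behind the
// "No retries permitted until ... (durationBeforeRetry ...)" lines:
// after a failure, the operation is blocked until lastFailure + delay.
type retryGate struct {
	delay     time.Duration
	notBefore time.Time
}

const initialDelay = 500 * time.Millisecond

func (g *retryGate) recordFailure(now time.Time) {
	if g.delay == 0 {
		g.delay = initialDelay
	} else {
		g.delay *= 2 // assumption: simple doubling; the real code also caps it
	}
	g.notBefore = now.Add(g.delay)
}

func (g *retryGate) allowed(now time.Time) bool {
	return !now.Before(g.notBefore)
}

func main() {
	var g retryGate
	now := time.Now()
	for attempt := 1; attempt <= 4; attempt++ {
		g.recordFailure(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, g.notBefore.Format("15:04:05.000"), g.delay)
		now = g.notBefore // pretend the retry fires exactly when the gate opens
	}
}
```

In the entries above, each rejected attempt reports a fresh window 500ms ahead of the attempt time; the pattern repeats until the CSI driver registers and an attempt finally gets past the driver lookup.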
Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.650067 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.150059182 +0000 UTC m=+145.803338073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.667538 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" podStartSLOduration=120.667518819 podStartE2EDuration="2m0.667518819s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.657570895 +0000 UTC m=+145.310849786" watchObservedRunningTime="2026-02-16 21:40:12.667518819 +0000 UTC m=+145.320797710" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.691465 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" podStartSLOduration=120.691440431 podStartE2EDuration="2m0.691440431s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.681781185 +0000 UTC m=+145.335060096" watchObservedRunningTime="2026-02-16 21:40:12.691440431 +0000 UTC m=+145.344719322" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.756622 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.757085 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.257067402 +0000 UTC m=+145.910346293 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.759819 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bnsxs" podStartSLOduration=120.75980063 podStartE2EDuration="2m0.75980063s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.757255337 +0000 UTC m=+145.410534248" watchObservedRunningTime="2026-02-16 21:40:12.75980063 +0000 UTC m=+145.413079521" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.760920 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-snd9g" podStartSLOduration=120.760912062 podStartE2EDuration="2m0.760912062s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.719909672 +0000 UTC m=+145.373188563" watchObservedRunningTime="2026-02-16 21:40:12.760912062 +0000 UTC m=+145.414190953" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.793537 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" podStartSLOduration=120.793517121 podStartE2EDuration="2m0.793517121s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:12.791794452 +0000 UTC m=+145.445073353" watchObservedRunningTime="2026-02-16 21:40:12.793517121 +0000 UTC m=+145.446796012" Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.859248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.859581 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.359569904 +0000 UTC m=+146.012848795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.959825 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.960042 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.460007698 +0000 UTC m=+146.113286599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:12 crc kubenswrapper[4792]: I0216 21:40:12.960143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:12 crc kubenswrapper[4792]: E0216 21:40:12.960567 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.460552753 +0000 UTC m=+146.113831644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.060877 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.061062 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.561036568 +0000 UTC m=+146.214315459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.061192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.061488 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.56147651 +0000 UTC m=+146.214755401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.162824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.163054 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.663022096 +0000 UTC m=+146.316300997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.163196 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.163453 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.663442118 +0000 UTC m=+146.316721009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.264102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.264261 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.764232041 +0000 UTC m=+146.417510932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.264734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.265059 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.765043655 +0000 UTC m=+146.418322546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.365773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.365906 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.8658786 +0000 UTC m=+146.519157491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.366096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.366371 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.866359103 +0000 UTC m=+146.519637994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.467411 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.467617 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.967579958 +0000 UTC m=+146.620858849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.467835 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.468120 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:13.968113034 +0000 UTC m=+146.621391925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.569340 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.569542 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.069514325 +0000 UTC m=+146.722793226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.569712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.570040 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.07003029 +0000 UTC m=+146.723309181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.581364 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:40:13 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Feb 16 21:40:13 crc kubenswrapper[4792]: [+]process-running ok
Feb 16 21:40:13 crc kubenswrapper[4792]: healthz check failed
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.581422 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.597405 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" event={"ID":"75a747bf-419d-47c3-bd88-628deb937dc7","Type":"ContainerStarted","Data":"f710e343cb71fd259d447172e9e9cdaee9d35948c938f5e1b913c5c5d841cf9e"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.597449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" event={"ID":"75a747bf-419d-47c3-bd88-628deb937dc7","Type":"ContainerStarted","Data":"cea7a8285015838d84f65d62e933df0fe2b25615e149c352ceaab4d9a66f58f5"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.599551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6btrx" event={"ID":"6e2d2b51-afe4-44d1-9c18-0bcef522d6dd","Type":"ContainerStarted","Data":"a0188c7393e592bdc2fa94a9f664e80ed5382c3288bd6c9da2b89fa4d27cb628"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.608351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" event={"ID":"735a4b10-b520-4e48-8cd0-fd47615af04b","Type":"ContainerStarted","Data":"5ce1173d50a5977e42c5d81037da9c5353f1fc81cd91e743dd9bd0574c68ab22"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.608512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.611415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hjb5c" event={"ID":"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8","Type":"ContainerStarted","Data":"a5745faa4867562a6a7d11df02084ca6f65ea4426801d71f7d8e31157f751e78"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.611437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hjb5c" event={"ID":"7122eb67-c55a-4ec5-a27e-c7a3dc24c0d8","Type":"ContainerStarted","Data":"4824ea43365c589861389ee0c2ff2c988337235f0cc14ed7ea984ac982520612"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.611871 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hjb5c"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.614056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" event={"ID":"1350e708-602a-4919-9178-424fc36b043b","Type":"ContainerStarted","Data":"d33aa377bee14fc4d7b825019150d63c83b6867ad63fc9283b64993193d4de00"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.614119 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" event={"ID":"1350e708-602a-4919-9178-424fc36b043b","Type":"ContainerStarted","Data":"9079fe668c58a2f35d1911b7025c1fe19f9a56aa4d63b0288af5ad74ca69080a"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.616346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" event={"ID":"ec33f265-8d79-4cf8-9565-ddc375565069","Type":"ContainerStarted","Data":"8059be7c83ac5ed707790708a8a9158e0d90b21d4a5d50c54e6a1f05f4e62aa0"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.616373 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" event={"ID":"ec33f265-8d79-4cf8-9565-ddc375565069","Type":"ContainerStarted","Data":"9189bde6a7e8f09815ef5ade18316a20240d31f1956d4c3b2eac7629922ee646"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.619146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" event={"ID":"b32c7a47-9e78-4732-a919-4cb62dc13f06","Type":"ContainerStarted","Data":"e2f5f755381adbc68bda0eb49f8c2ed51ed8cc0b82d8662e9296d12a5da191fb"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.619182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" event={"ID":"b32c7a47-9e78-4732-a919-4cb62dc13f06","Type":"ContainerStarted","Data":"62fe75cf48919cf3ff97e981f4a64b15ffa9bf1b85fbc54be438471134f17dbe"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.621190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v962t" event={"ID":"fe6870e6-fb04-4e82-ac5a-f23d225cad7a","Type":"ContainerStarted","Data":"1fe5744a74c2ddc6ce5ccd646468128219b57b52c414c97ede88e3485bfe1705"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.622685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" event={"ID":"c554cead-1e24-4255-9682-6a0ddb6e54b6","Type":"ContainerStarted","Data":"78a4fbac39b45d039a4c7a05c096bd34d9b78b3dcb23d098fc02e7b8a8444530"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.622802 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.624399 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" event={"ID":"18d326ed-a5e0-4663-bec0-8ee429a44c89","Type":"ContainerStarted","Data":"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.624865 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.629691 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ss6x2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.629727 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.634115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" event={"ID":"2cb51e3c-4f03-4e68-91fe-838816d8a376","Type":"ContainerStarted","Data":"f086701fa286eadcf38da7cf233dcbf9422a79a77be07bbf003ddaf47565f56f"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.636675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" event={"ID":"156ded60-abce-4ec4-912b-cbfece0f8d30","Type":"ContainerStarted","Data":"f71030cc2d201f7b8be484b725756bd9ee74407429b0b73068601287247089a4"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.637456 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.638438 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rjrpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.638475 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" podUID="156ded60-abce-4ec4-912b-cbfece0f8d30" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.641421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" event={"ID":"85aa40ba-6873-4c3d-9396-760b4597d183","Type":"ContainerStarted","Data":"b039e05865df50e3ce7d6ec7980b6579d3b439b3565567a3db1e8df221d39d24"}
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.642394 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-72gf6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.642432 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-72gf6" podUID="a78bbde7-7601-41dc-a9ef-a326cd6da349" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.648099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.669230 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hlxg6"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.670094 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zppvn" podStartSLOduration=121.670075392 podStartE2EDuration="2m1.670075392s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.63425318 +0000 UTC m=+146.287532071" watchObservedRunningTime="2026-02-16 21:40:13.670075392 +0000 UTC m=+146.323354283"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.685268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.686451 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.186432199 +0000 UTC m=+146.839711090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.710457 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" podStartSLOduration=121.710432423 podStartE2EDuration="2m1.710432423s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.709918167 +0000 UTC m=+146.363197058" watchObservedRunningTime="2026-02-16 21:40:13.710432423 +0000 UTC m=+146.363711314"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.713618 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" podStartSLOduration=121.713584332 podStartE2EDuration="2m1.713584332s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.668148507 +0000 UTC m=+146.321427398" watchObservedRunningTime="2026-02-16 21:40:13.713584332 +0000 UTC m=+146.366863223"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.758854 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wdfb6" podStartSLOduration=121.758837382 podStartE2EDuration="2m1.758837382s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.758155993 +0000 UTC m=+146.411434894" watchObservedRunningTime="2026-02-16 21:40:13.758837382 +0000 UTC m=+146.412116273"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.789535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.797514 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.297501705 +0000 UTC m=+146.950780596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.806275 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" podStartSLOduration=121.806262074 podStartE2EDuration="2m1.806262074s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.804647238 +0000 UTC m=+146.457926149" watchObservedRunningTime="2026-02-16 21:40:13.806262074 +0000 UTC m=+146.459540965"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.814694 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.815882 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.838124 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.891055 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.892091 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.392069371 +0000 UTC m=+147.045348262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.903026 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.950499 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" podStartSLOduration=121.950480246 podStartE2EDuration="2m1.950480246s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.938083532 +0000 UTC m=+146.591362433" watchObservedRunningTime="2026-02-16 21:40:13.950480246 +0000 UTC m=+146.603759137"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.952515 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hjb5c" podStartSLOduration=7.952507774 podStartE2EDuration="7.952507774s" podCreationTimestamp="2026-02-16 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:13.837955428 +0000 UTC m=+146.491234329" watchObservedRunningTime="2026-02-16 21:40:13.952507774 +0000 UTC m=+146.605786665"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.988538 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.988589 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.989977 4792 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5jwvl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.990023 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" podUID="ec33f265-8d79-4cf8-9565-ddc375565069" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 16 21:40:13 crc kubenswrapper[4792]: I0216 21:40:13.993053 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:13 crc kubenswrapper[4792]: E0216 21:40:13.993306 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.493297227 +0000 UTC m=+147.146576118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.054733 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-97jgh" podStartSLOduration=122.054713738 podStartE2EDuration="2m2.054713738s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:14.052865515 +0000 UTC m=+146.706144406" watchObservedRunningTime="2026-02-16 21:40:14.054713738 +0000 UTC m=+146.707992629"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.055754 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln" podStartSLOduration=122.055747777 podStartE2EDuration="2m2.055747777s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:14.0232364 +0000 UTC m=+146.676515301" watchObservedRunningTime="2026-02-16 21:40:14.055747777 +0000 UTC m=+146.709026668"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.075095 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc" podStartSLOduration=122.075073558 podStartE2EDuration="2m2.075073558s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:14.073651297 +0000 UTC m=+146.726930188" watchObservedRunningTime="2026-02-16 21:40:14.075073558 +0000 UTC m=+146.728352449"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.094105 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.094432 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.59441554 +0000 UTC m=+147.247694431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.189272 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sshb4" podStartSLOduration=122.189254834 podStartE2EDuration="2m2.189254834s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:14.188682927 +0000 UTC m=+146.841961838" watchObservedRunningTime="2026-02-16 21:40:14.189254834 +0000 UTC m=+146.842533715"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.197243 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.197530 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.69751841 +0000 UTC m=+147.350797301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.298513 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.298716 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.798690044 +0000 UTC m=+147.451968935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.298985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.299304 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.79928988 +0000 UTC m=+147.452568771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.339543 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.400323 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.400897 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:14.900878867 +0000 UTC m=+147.554157758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.501898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.502273 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.002256647 +0000 UTC m=+147.655535538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.580912 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:40:14 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Feb 16 21:40:14 crc kubenswrapper[4792]: [+]process-running ok
Feb 16 21:40:14 crc kubenswrapper[4792]: healthz check failed
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.580976 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.602813 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.602998 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.102972988 +0000 UTC m=+147.756251879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.603179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.603444 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.103432632 +0000 UTC m=+147.756711523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.643080 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6grsl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.643145 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" podUID="5d2adadd-eb49-4e47-bd5d-30b77fbbe635" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.667175 4792 generic.go:334] "Generic (PLEG): container finished" podID="2cb51e3c-4f03-4e68-91fe-838816d8a376" containerID="f086701fa286eadcf38da7cf233dcbf9422a79a77be07bbf003ddaf47565f56f" exitCode=0
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.667282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" event={"ID":"2cb51e3c-4f03-4e68-91fe-838816d8a376","Type":"ContainerDied","Data":"f086701fa286eadcf38da7cf233dcbf9422a79a77be07bbf003ddaf47565f56f"}
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.680196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v962t" event={"ID":"fe6870e6-fb04-4e82-ac5a-f23d225cad7a","Type":"ContainerStarted","Data":"8f54b6ac218726ced9334228cf6ac33c751e5461fbcdcd829022c0e2074f2163"}
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.681640 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ss6x2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.681671 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.688465 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nf4fz"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.701806 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rjrpc"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.704301 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.704478 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.204461342 +0000 UTC m=+147.857740233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.704848 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.705253 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.205235824 +0000 UTC m=+147.858514765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.791980 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-72gf6"
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.809018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.810480 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.310461235 +0000 UTC m=+147.963740126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:14 crc kubenswrapper[4792]: I0216 21:40:14.913340 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:14 crc kubenswrapper[4792]: E0216 21:40:14.913862 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.413850552 +0000 UTC m=+148.067129443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.007106 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.008640 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.011182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.016014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.016251 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.5162315 +0000 UTC m=+148.169510391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.020688 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.117921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.117965 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.117993 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jv7b\" (UniqueName: \"kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.118025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.118329 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.618317631 +0000 UTC m=+148.271596522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.200158 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5zmq"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.201020 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.203410 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.218465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.218748 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.218815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jv7b\" (UniqueName: \"kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.218855 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.219314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.219374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.219460 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.719443425 +0000 UTC m=+148.372722316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.229811 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5zmq"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.278264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jv7b\" (UniqueName: \"kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b\") pod \"certified-operators-jsr8l\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") " pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.322219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.322309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.322351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.322368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxlw\" (UniqueName: \"kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.322650 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.822639546 +0000 UTC m=+148.475918437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.345963 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.376608 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.377747 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424157 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424429 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxlw\" (UniqueName: \"kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjg7\" (UniqueName: \"kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.424516 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.424624 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:15.924608334 +0000 UTC m=+148.577887225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.425039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.425126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.426100 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.471871 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxlw\" (UniqueName: \"kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw\") pod \"community-operators-d5zmq\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") " pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.495756 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b9fln"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.527935 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.528562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.528620 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjg7\" (UniqueName: \"kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.528645 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.528689 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.529052 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.529067 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.029050552 +0000 UTC m=+148.682329443 (durationBeforeRetry 500ms).
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.529067 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.029050552 +0000 UTC m=+148.682329443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.551856 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.574647 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjg7\" (UniqueName: \"kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7\") pod \"certified-operators-bq6t6\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.605805 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:40:15 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Feb 16 21:40:15 crc kubenswrapper[4792]: [+]process-running ok
Feb 16 21:40:15 crc kubenswrapper[4792]: healthz check failed
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.605859 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.609259 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsrzg"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.610584 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.626974 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsrzg"]
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.631050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.631353 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.131298897 +0000 UTC m=+148.784577788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.631393 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.631931 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.131924064 +0000 UTC m=+148.785202955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.734942 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.735610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.735658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg5hk\" (UniqueName: \"kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.735679 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.735840 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.235825216 +0000 UTC m=+148.889104107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.746746 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6t6"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.747462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v962t" event={"ID":"fe6870e6-fb04-4e82-ac5a-f23d225cad7a","Type":"ContainerStarted","Data":"9aadccde0632be1679620ac5ed4353e186a23cd2c25c77799c12f5080ab32765"}
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.747492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v962t" event={"ID":"fe6870e6-fb04-4e82-ac5a-f23d225cad7a","Type":"ContainerStarted","Data":"1a494a17916935c4fdbdfd1619dc450e0c49f45017b6be62ee91ae0e0e6d94c7"}
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.795448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.796427 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v962t" podStartSLOduration=9.796418904 podStartE2EDuration="9.796418904s" podCreationTimestamp="2026-02-16 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:15.795907019 +0000 UTC m=+148.449185910" watchObservedRunningTime="2026-02-16 21:40:15.796418904 +0000 UTC m=+148.449697795"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.840587 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.840654 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.840719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg5hk\" (UniqueName: \"kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.840754 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.840780 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.841038 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.341026876 +0000 UTC m=+148.994305767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.844006 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.846830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.895336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.902199 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg5hk\" (UniqueName: \"kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk\") pod \"community-operators-xsrzg\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.940022 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsrzg"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.962267 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.962742 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.962820 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.962872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:40:15 crc kubenswrapper[4792]: E0216 21:40:15.971972 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.471947648 +0000 UTC m=+149.125226539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.982108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.985092 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:40:15 crc kubenswrapper[4792]: I0216 21:40:15.985539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.070806 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:16 crc kubenswrapper[4792]: E0216 21:40:16.071430 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.571419114 +0000 UTC m=+149.224698005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cpksb" (UID: "abd983af-64e8-4770-842c-9335c49ae36d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.075433 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.087911 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.130548 4792 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.133423 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.133987 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.137152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.137316 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.148347 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.171658 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:16 crc kubenswrapper[4792]: E0216 21:40:16.172154 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 21:40:16.672136036 +0000 UTC m=+149.325414927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.198948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.238438 4792 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T21:40:16.130575771Z","Handler":null,"Name":""}
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.245025 4792 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.245050 4792 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.265175 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.273139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.273240 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.273271 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.291186 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.302228 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.302262 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.373774 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume\") pod \"2cb51e3c-4f03-4e68-91fe-838816d8a376\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") "
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.373954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2p7f\" (UniqueName: \"kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f\") pod \"2cb51e3c-4f03-4e68-91fe-838816d8a376\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") "
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.374014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume\") pod \"2cb51e3c-4f03-4e68-91fe-838816d8a376\" (UID: \"2cb51e3c-4f03-4e68-91fe-838816d8a376\") "
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.374186 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.374213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.374873 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume" (OuterVolumeSpecName: "config-volume") pod "2cb51e3c-4f03-4e68-91fe-838816d8a376" (UID: "2cb51e3c-4f03-4e68-91fe-838816d8a376"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.375724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.403846 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.405225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cpksb\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.407638 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cb51e3c-4f03-4e68-91fe-838816d8a376" (UID: "2cb51e3c-4f03-4e68-91fe-838816d8a376"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.408028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f" (OuterVolumeSpecName: "kube-api-access-z2p7f") pod "2cb51e3c-4f03-4e68-91fe-838816d8a376" (UID: "2cb51e3c-4f03-4e68-91fe-838816d8a376"). InnerVolumeSpecName "kube-api-access-z2p7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.476273 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.476745 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb51e3c-4f03-4e68-91fe-838816d8a376-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.476761 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2p7f\" (UniqueName: \"kubernetes.io/projected/2cb51e3c-4f03-4e68-91fe-838816d8a376-kube-api-access-z2p7f\") on node \"crc\" DevicePath \"\""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.476772 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb51e3c-4f03-4e68-91fe-838816d8a376-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.482975 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.498285 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5zmq"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.519493 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.585238 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:40:16 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Feb 16 21:40:16 crc kubenswrapper[4792]: [+]process-running ok
Feb 16 21:40:16 crc kubenswrapper[4792]: healthz check failed
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.585280 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.631215 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsrzg"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.648638 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.699046 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"]
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.792186 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerID="06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407" exitCode=0
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.792289 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerDied","Data":"06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407"}
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.792318 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerStarted","Data":"30cab9361689286c1167ae9a03666687c8219dc1871524b9a2856332987b8ca1"}
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.794954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg" event={"ID":"2cb51e3c-4f03-4e68-91fe-838816d8a376","Type":"ContainerDied","Data":"46c80a6411ec1c3f6c9bf13dca4c1938b3e52329f7e831ff15fdbd080372f05f"}
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.794979 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c80a6411ec1c3f6c9bf13dca4c1938b3e52329f7e831ff15fdbd080372f05f"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.795055 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.798737 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerStarted","Data":"2269db805cfd73135390bedfb4f35ab2564c9a5172f9869a15963d5aa53bbebf"}
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.806041 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.864064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerStarted","Data":"19666d3b6f7b071b8869c19662e081ea02970c8d973e9c68f7e5111724ff3d3f"}
Feb 16 21:40:16 crc kubenswrapper[4792]: W0216 21:40:16.880926 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c97dd9af6a99805154ca78e8d4846a1ab9c53ecd2499cd787bcad6dd244205a0 WatchSource:0}: Error finding container c97dd9af6a99805154ca78e8d4846a1ab9c53ecd2499cd787bcad6dd244205a0: Status 404 returned error can't find the container with id c97dd9af6a99805154ca78e8d4846a1ab9c53ecd2499cd787bcad6dd244205a0
Feb 16 21:40:16 crc kubenswrapper[4792]: I0216 21:40:16.881123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerStarted","Data":"04e75747ba6234e8a99953f5501497a6151b5109db04bd2aa36287fb7295332b"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.000695 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 21:40:17 crc kubenswrapper[4792]: W0216 21:40:17.119833 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-829e7f4bb0b18f6639d9aa1c13ea4fc64332468db65fe8b48811340fd0dacbc5 WatchSource:0}: Error finding container 829e7f4bb0b18f6639d9aa1c13ea4fc64332468db65fe8b48811340fd0dacbc5: Status 404 returned error can't find the container with id 829e7f4bb0b18f6639d9aa1c13ea4fc64332468db65fe8b48811340fd0dacbc5
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.174066 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"]
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.187491 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"]
Feb 16 21:40:17 crc kubenswrapper[4792]: E0216 21:40:17.192187 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb51e3c-4f03-4e68-91fe-838816d8a376" containerName="collect-profiles"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.192222 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb51e3c-4f03-4e68-91fe-838816d8a376" containerName="collect-profiles"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.192451 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb51e3c-4f03-4e68-91fe-838816d8a376" containerName="collect-profiles"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.201623 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.207077 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"]
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.214084 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.295769 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5mt\" (UniqueName: \"kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.295819 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.295852 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.397519 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.397572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.398040 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5mt\" (UniqueName: \"kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.398147 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.398194 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.417632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5mt\" (UniqueName: \"kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt\") pod \"redhat-marketplace-52qsq\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") " pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.565619 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"]
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.566968 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.571586 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.583061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"]
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.584977 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:40:17 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Feb 16 21:40:17 crc kubenswrapper[4792]: [+]process-running ok
Feb 16 21:40:17 crc kubenswrapper[4792]: healthz check failed
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.585025 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.602495 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.602590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhnh\" (UniqueName: \"kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.602633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.704262 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.704686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhnh\" (UniqueName: \"kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.704715 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.705857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.705917 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.735115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhnh\" (UniqueName: \"kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh\") pod \"redhat-marketplace-j6frh\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.789657 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"]
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.888866 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6frh"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.891037 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d50b3c26ffe94f4752f1c8d766ec1a300c8e9c0bf4056f0be93c11c742eb03d7"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.891074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"829e7f4bb0b18f6639d9aa1c13ea4fc64332468db65fe8b48811340fd0dacbc5"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.893094 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerStarted","Data":"78448ce1f49783a30ab5695e910f4aad33c54e3c8488a55836f88fc4427f0ea5"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.894621 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ca946f76da6acbde2bec0e71657ed50cfbda51b0103de4b202598b8b81b4053f"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.894800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"937ccafe060ca92654249ed209823444b5fcc535b773d440f087c61b3f16a6df"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.894915 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.902329 4792 generic.go:334] "Generic (PLEG): container finished" podID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerID="aac8f2628bff40d794294f021a770a133ad423ed351758a9ffa1f25fe6ad3bb7" exitCode=0
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.902406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerDied","Data":"aac8f2628bff40d794294f021a770a133ad423ed351758a9ffa1f25fe6ad3bb7"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.904417 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"28f5cdc4-616a-4608-9e83-653048a0ba00","Type":"ContainerStarted","Data":"f00118a5f7142609acf1c98ea93361442cf1e1d76895780d58d19568af1b858b"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.904449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"28f5cdc4-616a-4608-9e83-653048a0ba00","Type":"ContainerStarted","Data":"4a9438eeb5f84438899ad3e684435aa65310ebd58d64492f09cdce19fe754242"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.928815 4792 generic.go:334] "Generic (PLEG): container finished" podID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerID="b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3" exitCode=0
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.929006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerDied","Data":"b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.948772 4792 generic.go:334] "Generic (PLEG): container finished" podID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerID="96f4d804d04af14755d9059c127bdea5752c84370b315edd837db6d3c95d2c14" exitCode=0
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.948911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerDied","Data":"96f4d804d04af14755d9059c127bdea5752c84370b315edd837db6d3c95d2c14"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.955414 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" event={"ID":"abd983af-64e8-4770-842c-9335c49ae36d","Type":"ContainerStarted","Data":"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.955455 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" event={"ID":"abd983af-64e8-4770-842c-9335c49ae36d","Type":"ContainerStarted","Data":"0d903b8cda092b0bf6f174e9f4f617971d20c2b847bfad6a66bcac797ed6f290"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.955494 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.957847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"effcd7c42ff9975993a2aab3090b65d574d0e8946ef376a729f0e2300de846fc"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.957880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c97dd9af6a99805154ca78e8d4846a1ab9c53ecd2499cd787bcad6dd244205a0"}
Feb 16 21:40:17 crc kubenswrapper[4792]: I0216 21:40:17.982640 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.982623023 podStartE2EDuration="1.982623023s" podCreationTimestamp="2026-02-16 21:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:17.966154623 +0000 UTC m=+150.619433524" watchObservedRunningTime="2026-02-16 21:40:17.982623023 +0000 UTC m=+150.635901914"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.000465 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" podStartSLOduration=126.000446342 podStartE2EDuration="2m6.000446342s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:17.999041962 +0000 UTC m=+150.652320863" watchObservedRunningTime="2026-02-16 21:40:18.000446342 +0000 UTC m=+150.653725233"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.078278 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.133796 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.134627 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.142906 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.143720 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.143800 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.169877 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"]
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.170902 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.173341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.180340 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"]
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.206578 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"]
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.224856 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.224939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.224986 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.225007 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.225151 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnft\" (UniqueName: \"kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: W0216 21:40:18.233505 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6dbc74b_0b2a_4615_b871_7c312e47854b.slice/crio-c5bb9b89d1500339f5de5f9c51a14bc32c58414944791aace573a2976c3afad8 WatchSource:0}: Error finding container c5bb9b89d1500339f5de5f9c51a14bc32c58414944791aace573a2976c3afad8: Status 404 returned error can't find the container with id c5bb9b89d1500339f5de5f9c51a14bc32c58414944791aace573a2976c3afad8
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.326985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.326932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.327050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.327078 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.327100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.327164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnft\" (UniqueName: \"kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.328166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.328231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.349199 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.350683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnft\" (UniqueName: \"kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft\") pod \"redhat-operators-np9jz\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") " pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.504081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.513131 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.529046 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-gd457 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.529103 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gd457" podUID="5e2db923-4a84-4a7d-8507-065f4920080d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.529538 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-gd457 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.529561 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gd457" podUID="5e2db923-4a84-4a7d-8507-065f4920080d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.576864 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.578547 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.585648 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:40:18 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Feb 16 21:40:18 crc kubenswrapper[4792]: [+]process-running ok Feb 16 21:40:18 crc kubenswrapper[4792]: healthz check failed Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.585710 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.590379 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.630624 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.630762 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzj2r\" (UniqueName: \"kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.630789 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.738737 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzj2r\" (UniqueName: \"kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.739042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.739081 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.739729 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.740331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.774753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzj2r\" (UniqueName: \"kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r\") pod \"redhat-operators-vpmg2\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.776587 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.912049 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.992060 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerID="e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b" exitCode=0 Feb 16 21:40:18 crc kubenswrapper[4792]: I0216 21:40:18.992111 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerDied","Data":"e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b"} Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:18.995797 4792 generic.go:334] "Generic (PLEG): container finished" podID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerID="36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b" exitCode=0 Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:18.995849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerDied","Data":"36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b"} Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:18.995872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerStarted","Data":"c5bb9b89d1500339f5de5f9c51a14bc32c58414944791aace573a2976c3afad8"} Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.005815 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.010903 4792 generic.go:334] "Generic (PLEG): container finished" podID="28f5cdc4-616a-4608-9e83-653048a0ba00" containerID="f00118a5f7142609acf1c98ea93361442cf1e1d76895780d58d19568af1b858b" exitCode=0 Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.011282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"28f5cdc4-616a-4608-9e83-653048a0ba00","Type":"ContainerDied","Data":"f00118a5f7142609acf1c98ea93361442cf1e1d76895780d58d19568af1b858b"} Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.016851 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5jwvl" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.030550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b76d4cda-6777-4442-a30a-ec36ffd7d108","Type":"ContainerStarted","Data":"6111df68557aa1255cecc6108403f654ec2e6064d0ce875260ebce405b94356b"} Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.092026 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"] Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.228729 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.228775 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.229000 4792 patch_prober.go:28] interesting pod/console-f9d7485db-tr7np container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.229036 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tr7np" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.297525 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.577812 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.586913 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:40:19 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Feb 16 21:40:19 crc kubenswrapper[4792]: [+]process-running ok Feb 16 21:40:19 crc kubenswrapper[4792]: healthz check failed Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.586975 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:40:19 crc kubenswrapper[4792]: I0216 21:40:19.625024 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6grsl" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.046674 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerID="80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155" exitCode=0 Feb 16 
21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.046759 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerDied","Data":"80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155"} Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.046785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerStarted","Data":"d93ebf4067b98f6619523653a9cdaaf1947fde566707fb50e5efac6c1a01b0c0"} Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.049033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b76d4cda-6777-4442-a30a-ec36ffd7d108","Type":"ContainerStarted","Data":"63b71cef08a3336975f22f0de97a54e7ef99e8c80fd3badb51747616daac2f1f"} Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.056065 4792 generic.go:334] "Generic (PLEG): container finished" podID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerID="4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18" exitCode=0 Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.056218 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerDied","Data":"4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18"} Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.056271 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerStarted","Data":"c2d55592efefd820da8dd65967c3f3ed73a041eb94c8b2443d8e49005a4f6c90"} Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.082311 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.082289025 podStartE2EDuration="2.082289025s" podCreationTimestamp="2026-02-16 21:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:20.078873547 +0000 UTC m=+152.732152448" watchObservedRunningTime="2026-02-16 21:40:20.082289025 +0000 UTC m=+152.735567916" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.405230 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.558046 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir\") pod \"28f5cdc4-616a-4608-9e83-653048a0ba00\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.558112 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access\") pod \"28f5cdc4-616a-4608-9e83-653048a0ba00\" (UID: \"28f5cdc4-616a-4608-9e83-653048a0ba00\") " Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.558703 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "28f5cdc4-616a-4608-9e83-653048a0ba00" (UID: "28f5cdc4-616a-4608-9e83-653048a0ba00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.582263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "28f5cdc4-616a-4608-9e83-653048a0ba00" (UID: "28f5cdc4-616a-4608-9e83-653048a0ba00"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.584047 4792 patch_prober.go:28] interesting pod/router-default-5444994796-2k2ct container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:40:20 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Feb 16 21:40:20 crc kubenswrapper[4792]: [+]process-running ok Feb 16 21:40:20 crc kubenswrapper[4792]: healthz check failed Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.584086 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2k2ct" podUID="1f3f794e-3279-48fc-a684-e6d40fadd760" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.659795 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28f5cdc4-616a-4608-9e83-653048a0ba00-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:20 crc kubenswrapper[4792]: I0216 21:40:20.659826 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28f5cdc4-616a-4608-9e83-653048a0ba00-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.066660 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.066627 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"28f5cdc4-616a-4608-9e83-653048a0ba00","Type":"ContainerDied","Data":"4a9438eeb5f84438899ad3e684435aa65310ebd58d64492f09cdce19fe754242"} Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.066757 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a9438eeb5f84438899ad3e684435aa65310ebd58d64492f09cdce19fe754242" Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.071078 4792 generic.go:334] "Generic (PLEG): container finished" podID="b76d4cda-6777-4442-a30a-ec36ffd7d108" containerID="63b71cef08a3336975f22f0de97a54e7ef99e8c80fd3badb51747616daac2f1f" exitCode=0 Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.071129 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b76d4cda-6777-4442-a30a-ec36ffd7d108","Type":"ContainerDied","Data":"63b71cef08a3336975f22f0de97a54e7ef99e8c80fd3badb51747616daac2f1f"} Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.579694 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:21 crc kubenswrapper[4792]: I0216 21:40:21.581679 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2k2ct" Feb 16 21:40:24 crc kubenswrapper[4792]: I0216 21:40:24.699758 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hjb5c" Feb 16 21:40:29 crc kubenswrapper[4792]: E0216 21:40:29.249560 4792 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.224s" Feb 16 21:40:29 crc kubenswrapper[4792]: I0216 21:40:29.250153 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:29 crc kubenswrapper[4792]: I0216 21:40:29.250262 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gd457" Feb 16 21:40:29 crc kubenswrapper[4792]: I0216 21:40:29.271533 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.427672 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.531981 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.532038 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.575077 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access\") pod \"b76d4cda-6777-4442-a30a-ec36ffd7d108\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.575168 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir\") pod \"b76d4cda-6777-4442-a30a-ec36ffd7d108\" (UID: \"b76d4cda-6777-4442-a30a-ec36ffd7d108\") " Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.575319 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b76d4cda-6777-4442-a30a-ec36ffd7d108" (UID: "b76d4cda-6777-4442-a30a-ec36ffd7d108"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.575517 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76d4cda-6777-4442-a30a-ec36ffd7d108-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.580752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b76d4cda-6777-4442-a30a-ec36ffd7d108" (UID: "b76d4cda-6777-4442-a30a-ec36ffd7d108"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:31 crc kubenswrapper[4792]: I0216 21:40:31.677175 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76d4cda-6777-4442-a30a-ec36ffd7d108-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:40:32 crc kubenswrapper[4792]: I0216 21:40:32.251238 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b76d4cda-6777-4442-a30a-ec36ffd7d108","Type":"ContainerDied","Data":"6111df68557aa1255cecc6108403f654ec2e6064d0ce875260ebce405b94356b"} Feb 16 21:40:32 crc kubenswrapper[4792]: I0216 21:40:32.251309 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6111df68557aa1255cecc6108403f654ec2e6064d0ce875260ebce405b94356b" Feb 16 21:40:32 crc kubenswrapper[4792]: I0216 21:40:32.251264 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 21:40:34 crc kubenswrapper[4792]: I0216 21:40:34.621550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:34 crc kubenswrapper[4792]: I0216 21:40:34.627695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8-metrics-certs\") pod \"network-metrics-daemon-sxb4b\" (UID: \"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8\") " pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:34 crc kubenswrapper[4792]: I0216 21:40:34.750822 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sxb4b" Feb 16 21:40:36 crc kubenswrapper[4792]: I0216 21:40:36.660022 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:40:46 crc kubenswrapper[4792]: E0216 21:40:46.694728 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 21:40:46 crc kubenswrapper[4792]: E0216 21:40:46.695256 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcxlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d5zmq_openshift-marketplace(edd14fca-8d4f-4537-94f9-cebf5ffe935c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 21:40:46 crc kubenswrapper[4792]: E0216 21:40:46.696413 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d5zmq" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" Feb 16 21:40:46 crc kubenswrapper[4792]: I0216 21:40:46.816791 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sxb4b"] Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.356878 4792 generic.go:334] "Generic (PLEG): container finished" podID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerID="4c2fbfa30ab3dbae99294c3f7d54495d221be8b08005cf42f124a94e20a99a0d" exitCode=0 Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.357154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" 
event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerDied","Data":"4c2fbfa30ab3dbae99294c3f7d54495d221be8b08005cf42f124a94e20a99a0d"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.361275 4792 generic.go:334] "Generic (PLEG): container finished" podID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerID="b591a058e411473cff12307f644c0084d1daa4754ed34ff1fccf2d54ea28ab21" exitCode=0 Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.361349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerDied","Data":"b591a058e411473cff12307f644c0084d1daa4754ed34ff1fccf2d54ea28ab21"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.434893 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerID="6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d" exitCode=0 Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.434999 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerDied","Data":"6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.451548 4792 generic.go:334] "Generic (PLEG): container finished" podID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerID="9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539" exitCode=0 Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.451654 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerDied","Data":"9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.460226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" event={"ID":"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8","Type":"ContainerStarted","Data":"afeec6ac5ed93b15f4c9ab0e8129e5147496a948579764bd9038ed1e39f8dd68"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.460353 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" event={"ID":"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8","Type":"ContainerStarted","Data":"115d31def62ad07cb0c803a79efa8916d00de8133aab7ebdcfa6edbb72543736"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.480910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerStarted","Data":"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.490256 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerStarted","Data":"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884"} Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.496411 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerID="a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b" exitCode=0 Feb 16 21:40:47 crc kubenswrapper[4792]: I0216 21:40:47.497318 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerDied","Data":"a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b"} Feb 16 21:40:47 crc kubenswrapper[4792]: E0216 21:40:47.499825 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d5zmq" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" Feb 16 21:40:48 crc kubenswrapper[4792]: I0216 21:40:48.504839 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerID="5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419" exitCode=0 Feb 16 21:40:48 crc kubenswrapper[4792]: I0216 21:40:48.504959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerDied","Data":"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419"} Feb 16 21:40:48 crc kubenswrapper[4792]: I0216 21:40:48.510984 4792 generic.go:334] "Generic (PLEG): container finished" podID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerID="77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884" exitCode=0 Feb 16 21:40:48 crc kubenswrapper[4792]: I0216 21:40:48.511026 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerDied","Data":"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884"} Feb 16 21:40:48 crc kubenswrapper[4792]: I0216 21:40:48.513423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sxb4b" event={"ID":"9dd2ec1e-0eb5-45ac-ba7f-c40ca6f0cac8","Type":"ContainerStarted","Data":"f6a9507bcc2362dcb942ef355672dd64fa3bf05a81b2cb800b02b38559e393f7"} Feb 16 21:40:49 crc kubenswrapper[4792]: I0216 21:40:49.624174 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpskb" Feb 16 21:40:49 crc kubenswrapper[4792]: I0216 21:40:49.641434 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sxb4b" podStartSLOduration=157.641415046 podStartE2EDuration="2m37.641415046s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:48.569932378 +0000 UTC m=+181.223211269" watchObservedRunningTime="2026-02-16 21:40:49.641415046 +0000 UTC m=+182.294693937" Feb 16 21:40:50 crc kubenswrapper[4792]: I0216 21:40:50.540651 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerStarted","Data":"12ac958b55a34e45a07a1c582c25f55dd1da3729e8e3dcabc0a034e093e02ba0"} Feb 16 21:40:50 crc kubenswrapper[4792]: I0216 21:40:50.572181 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bq6t6" podStartSLOduration=4.442022021 podStartE2EDuration="35.572155122s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="2026-02-16 21:40:17.958488195 
+0000 UTC m=+150.611767086" lastFinishedPulling="2026-02-16 21:40:49.088621296 +0000 UTC m=+181.741900187" observedRunningTime="2026-02-16 21:40:50.562204559 +0000 UTC m=+183.215483460" watchObservedRunningTime="2026-02-16 21:40:50.572155122 +0000 UTC m=+183.225434023" Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.547892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerStarted","Data":"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.550995 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerStarted","Data":"444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.553096 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerStarted","Data":"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.555556 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerStarted","Data":"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.557377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerStarted","Data":"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.559379 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerStarted","Data":"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c"} Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.576903 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-52qsq" podStartSLOduration=2.8184485820000003 podStartE2EDuration="34.576890167s" podCreationTimestamp="2026-02-16 21:40:17 +0000 UTC" firstStartedPulling="2026-02-16 21:40:18.994146572 +0000 UTC m=+151.647425463" lastFinishedPulling="2026-02-16 21:40:50.752588157 +0000 UTC m=+183.405867048" observedRunningTime="2026-02-16 21:40:51.574769777 +0000 UTC m=+184.228048668" watchObservedRunningTime="2026-02-16 21:40:51.576890167 +0000 UTC m=+184.230169048" Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.592423 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsrzg" podStartSLOduration=4.006235817 podStartE2EDuration="36.59240327s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="2026-02-16 21:40:17.912217476 +0000 UTC m=+150.565496367" lastFinishedPulling="2026-02-16 21:40:50.498384929 +0000 UTC m=+183.151663820" observedRunningTime="2026-02-16 21:40:51.591395601 +0000 UTC m=+184.244674502" watchObservedRunningTime="2026-02-16 21:40:51.59240327 +0000 UTC m=+184.245682161" Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.611866 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j6frh" podStartSLOduration=3.12878112 podStartE2EDuration="34.611848453s" podCreationTimestamp="2026-02-16 21:40:17 +0000 UTC" firstStartedPulling="2026-02-16 21:40:18.997038415 +0000 UTC m=+151.650317306" lastFinishedPulling="2026-02-16 21:40:50.480105728 +0000 UTC m=+183.133384639" observedRunningTime="2026-02-16 21:40:51.609658132 +0000 UTC m=+184.262937023" watchObservedRunningTime="2026-02-16 21:40:51.611848453 +0000 UTC m=+184.265127344" Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.631356 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-np9jz" podStartSLOduration=2.787822559 podStartE2EDuration="33.631334019s" podCreationTimestamp="2026-02-16 21:40:18 +0000 UTC" firstStartedPulling="2026-02-16 21:40:20.058714773 +0000 UTC m=+152.711993664" lastFinishedPulling="2026-02-16 21:40:50.902226233 +0000 UTC m=+183.555505124" observedRunningTime="2026-02-16 21:40:51.628419666 +0000 UTC m=+184.281698557" watchObservedRunningTime="2026-02-16 21:40:51.631334019 +0000 UTC m=+184.284612900" Feb 16 21:40:51 crc kubenswrapper[4792]: I0216 21:40:51.648706 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsr8l" podStartSLOduration=3.845288079 podStartE2EDuration="37.648687715s" podCreationTimestamp="2026-02-16 21:40:14 +0000 UTC" firstStartedPulling="2026-02-16 21:40:16.805682859 +0000 UTC m=+149.458961750" lastFinishedPulling="2026-02-16 21:40:50.609082505 +0000 UTC m=+183.262361386" observedRunningTime="2026-02-16 21:40:51.645515004 +0000 UTC m=+184.298793905" watchObservedRunningTime="2026-02-16 21:40:51.648687715 +0000 UTC m=+184.301966606" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.346953 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsr8l" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.347421 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsr8l" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.718989 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsr8l" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.742087 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpmg2" podStartSLOduration=7.117212572 podStartE2EDuration="37.742071707s" podCreationTimestamp="2026-02-16 21:40:18 +0000 UTC" firstStartedPulling="2026-02-16 21:40:20.048620436 +0000 UTC m=+152.701899327" lastFinishedPulling="2026-02-16 21:40:50.673479571 +0000 UTC m=+183.326758462" observedRunningTime="2026-02-16 21:40:51.668658154 +0000 UTC m=+184.321937045" watchObservedRunningTime="2026-02-16 21:40:55.742071707 +0000 UTC m=+188.395350598" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.748438 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.748474 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.769933 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-jsr8l" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.783167 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.941171 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.941218 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:40:55 crc kubenswrapper[4792]: I0216 21:40:55.978735 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.106619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.144686 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 21:40:56 crc kubenswrapper[4792]: E0216 21:40:56.144999 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f5cdc4-616a-4608-9e83-653048a0ba00" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.145018 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f5cdc4-616a-4608-9e83-653048a0ba00" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: E0216 21:40:56.145033 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d4cda-6777-4442-a30a-ec36ffd7d108" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.145040 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d4cda-6777-4442-a30a-ec36ffd7d108" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.145178 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d4cda-6777-4442-a30a-ec36ffd7d108" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.145193 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f5cdc4-616a-4608-9e83-653048a0ba00" containerName="pruner" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.145756 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.148031 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.148031 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.152273 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.307369 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.307713 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.409190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.409325 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.410014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.434704 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.471232 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.635893 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.643757 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:40:56 crc kubenswrapper[4792]: I0216 21:40:56.674219 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.572072 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-52qsq" Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.573172 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-52qsq" Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.595739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"699b1416-5ca0-4d93-825c-93177a5d52a8","Type":"ContainerStarted","Data":"1ca762e2a9df58f451bbb126edcc201f06445f86a9f2f1760a4d4dce91c9ceff"} Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.618629 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-52qsq" Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.791362 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"] Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.823441 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"] Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.891087 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.891131 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:40:57 crc kubenswrapper[4792]: I0216 21:40:57.927315 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.514354 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.514615 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.559211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.605676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"699b1416-5ca0-4d93-825c-93177a5d52a8","Type":"ContainerStarted","Data":"f0b64c0db74e0b6ac98a8e44753adf09292a6cf978c8f96f35c616c77ea1d73b"} Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.606462 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bq6t6" 
podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="registry-server" containerID="cri-o://12ac958b55a34e45a07a1c582c25f55dd1da3729e8e3dcabc0a034e093e02ba0" gracePeriod=2 Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.672256 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.681980 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-52qsq" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.682185 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.688030 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.688019205 podStartE2EDuration="2.688019205s" podCreationTimestamp="2026-02-16 21:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:58.628794948 +0000 UTC m=+191.282073839" watchObservedRunningTime="2026-02-16 21:40:58.688019205 +0000 UTC m=+191.341298096" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.912865 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.912919 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:58 crc kubenswrapper[4792]: I0216 21:40:58.950407 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.630710 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsrzg"] Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.630952 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsrzg" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="registry-server" containerID="cri-o://444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed" gracePeriod=2 Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.631642 4792 generic.go:334] "Generic (PLEG): container finished" podID="699b1416-5ca0-4d93-825c-93177a5d52a8" containerID="f0b64c0db74e0b6ac98a8e44753adf09292a6cf978c8f96f35c616c77ea1d73b" exitCode=0 Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.631793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"699b1416-5ca0-4d93-825c-93177a5d52a8","Type":"ContainerDied","Data":"f0b64c0db74e0b6ac98a8e44753adf09292a6cf978c8f96f35c616c77ea1d73b"} Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.636515 4792 generic.go:334] "Generic (PLEG): container finished" podID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerID="12ac958b55a34e45a07a1c582c25f55dd1da3729e8e3dcabc0a034e093e02ba0" exitCode=0 Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.636607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" 
event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerDied","Data":"12ac958b55a34e45a07a1c582c25f55dd1da3729e8e3dcabc0a034e093e02ba0"} Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.675442 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:40:59 crc kubenswrapper[4792]: E0216 21:40:59.810366 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a23b11_cd5b_4d2b_adcf_06a39a1c62d8.slice/crio-444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:40:59 crc kubenswrapper[4792]: I0216 21:40:59.909196 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.058774 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities\") pod \"9734b6b8-841c-437d-acf0-b1e3948ee61f\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.058828 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content\") pod \"9734b6b8-841c-437d-acf0-b1e3948ee61f\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.058891 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjg7\" (UniqueName: \"kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7\") pod \"9734b6b8-841c-437d-acf0-b1e3948ee61f\" (UID: \"9734b6b8-841c-437d-acf0-b1e3948ee61f\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.059463 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities" (OuterVolumeSpecName: "utilities") pod "9734b6b8-841c-437d-acf0-b1e3948ee61f" (UID: "9734b6b8-841c-437d-acf0-b1e3948ee61f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.064960 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7" (OuterVolumeSpecName: "kube-api-access-hzjg7") pod "9734b6b8-841c-437d-acf0-b1e3948ee61f" (UID: "9734b6b8-841c-437d-acf0-b1e3948ee61f"). InnerVolumeSpecName "kube-api-access-hzjg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.160244 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.160325 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjg7\" (UniqueName: \"kubernetes.io/projected/9734b6b8-841c-437d-acf0-b1e3948ee61f-kube-api-access-hzjg7\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.221450 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"] Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.333917 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9734b6b8-841c-437d-acf0-b1e3948ee61f" (UID: "9734b6b8-841c-437d-acf0-b1e3948ee61f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.367484 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9734b6b8-841c-437d-acf0-b1e3948ee61f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.644119 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6t6" event={"ID":"9734b6b8-841c-437d-acf0-b1e3948ee61f","Type":"ContainerDied","Data":"04e75747ba6234e8a99953f5501497a6151b5109db04bd2aa36287fb7295332b"} Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.644200 4792 scope.go:117] "RemoveContainer" containerID="12ac958b55a34e45a07a1c582c25f55dd1da3729e8e3dcabc0a034e093e02ba0" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.644304 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bq6t6" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.647110 4792 generic.go:334] "Generic (PLEG): container finished" podID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerID="444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed" exitCode=0 Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.647303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerDied","Data":"444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed"} Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.649451 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j6frh" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="registry-server" containerID="cri-o://720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227" gracePeriod=2 Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.678088 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"] Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.682440 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bq6t6"] Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.683611 4792 scope.go:117] "RemoveContainer" containerID="b591a058e411473cff12307f644c0084d1daa4754ed34ff1fccf2d54ea28ab21" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.699684 4792 scope.go:117] "RemoveContainer" containerID="96f4d804d04af14755d9059c127bdea5752c84370b315edd837db6d3c95d2c14" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.897806 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.951031 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.975543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content\") pod \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.975618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities\") pod \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.975689 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir\") pod \"699b1416-5ca0-4d93-825c-93177a5d52a8\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.975740 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg5hk\" (UniqueName: \"kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk\") pod \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\" (UID: \"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.975767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access\") pod \"699b1416-5ca0-4d93-825c-93177a5d52a8\" (UID: \"699b1416-5ca0-4d93-825c-93177a5d52a8\") " Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.976718 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "699b1416-5ca0-4d93-825c-93177a5d52a8" (UID: "699b1416-5ca0-4d93-825c-93177a5d52a8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.976926 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699b1416-5ca0-4d93-825c-93177a5d52a8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.977213 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities" (OuterVolumeSpecName: "utilities") pod "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" (UID: "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.980851 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "699b1416-5ca0-4d93-825c-93177a5d52a8" (UID: "699b1416-5ca0-4d93-825c-93177a5d52a8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:00 crc kubenswrapper[4792]: I0216 21:41:00.980965 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk" (OuterVolumeSpecName: "kube-api-access-wg5hk") pod "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" (UID: "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8"). InnerVolumeSpecName "kube-api-access-wg5hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.044260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" (UID: "84a23b11-cd5b-4d2b-adcf-06a39a1c62d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.078102 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg5hk\" (UniqueName: \"kubernetes.io/projected/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-kube-api-access-wg5hk\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.078135 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/699b1416-5ca0-4d93-825c-93177a5d52a8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.078146 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.078158 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528213 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528435 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528453 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528470 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528479 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528492 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699b1416-5ca0-4d93-825c-93177a5d52a8" containerName="pruner" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528500 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="699b1416-5ca0-4d93-825c-93177a5d52a8" containerName="pruner" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528514 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" 
containerName="extract-utilities" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528522 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="extract-utilities" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528533 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="extract-content" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528541 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="extract-content" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528551 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="extract-utilities" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528559 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="extract-utilities" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.528575 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="extract-content" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528580 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="extract-content" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528685 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528695 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="699b1416-5ca0-4d93-825c-93177a5d52a8" containerName="pruner" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.528703 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" containerName="registry-server" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.529092 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.533199 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.533256 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.541570 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.582802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.582952 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.582975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.655719 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.657079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerStarted","Data":"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238"} Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.660214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsrzg" event={"ID":"84a23b11-cd5b-4d2b-adcf-06a39a1c62d8","Type":"ContainerDied","Data":"19666d3b6f7b071b8869c19662e081ea02970c8d973e9c68f7e5111724ff3d3f"} Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.660253 4792 scope.go:117] "RemoveContainer" containerID="444ee69a6b0aa7c863a3b508422084db79f3f61979efb78abf0f414bc5325bed" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.660253 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsrzg" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.665246 4792 generic.go:334] "Generic (PLEG): container finished" podID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerID="720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227" exitCode=0 Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.665289 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerDied","Data":"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227"} Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.665368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6frh" event={"ID":"b6dbc74b-0b2a-4615-b871-7c312e47854b","Type":"ContainerDied","Data":"c5bb9b89d1500339f5de5f9c51a14bc32c58414944791aace573a2976c3afad8"} Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.665375 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6frh" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.667264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"699b1416-5ca0-4d93-825c-93177a5d52a8","Type":"ContainerDied","Data":"1ca762e2a9df58f451bbb126edcc201f06445f86a9f2f1760a4d4dce91c9ceff"} Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.667288 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ca762e2a9df58f451bbb126edcc201f06445f86a9f2f1760a4d4dce91c9ceff" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.667322 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhnh\" (UniqueName: \"kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh\") pod \"b6dbc74b-0b2a-4615-b871-7c312e47854b\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683423 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities\") pod \"b6dbc74b-0b2a-4615-b871-7c312e47854b\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content\") pod \"b6dbc74b-0b2a-4615-b871-7c312e47854b\" (UID: \"b6dbc74b-0b2a-4615-b871-7c312e47854b\") " Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.683972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.684426 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities" (OuterVolumeSpecName: "utilities") pod "b6dbc74b-0b2a-4615-b871-7c312e47854b" (UID: "b6dbc74b-0b2a-4615-b871-7c312e47854b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.684575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.684676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.685885 4792 scope.go:117] "RemoveContainer" containerID="4c2fbfa30ab3dbae99294c3f7d54495d221be8b08005cf42f124a94e20a99a0d" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.690088 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh" (OuterVolumeSpecName: "kube-api-access-2dhnh") pod "b6dbc74b-0b2a-4615-b871-7c312e47854b" (UID: "b6dbc74b-0b2a-4615-b871-7c312e47854b"). InnerVolumeSpecName "kube-api-access-2dhnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.703967 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access\") pod \"installer-9-crc\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.707154 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsrzg"] Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.712825 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsrzg"] Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.716771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6dbc74b-0b2a-4615-b871-7c312e47854b" (UID: "b6dbc74b-0b2a-4615-b871-7c312e47854b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.717802 4792 scope.go:117] "RemoveContainer" containerID="aac8f2628bff40d794294f021a770a133ad423ed351758a9ffa1f25fe6ad3bb7" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.733719 4792 scope.go:117] "RemoveContainer" containerID="720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.747065 4792 scope.go:117] "RemoveContainer" containerID="9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.758507 4792 scope.go:117] "RemoveContainer" containerID="36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.773941 4792 scope.go:117] "RemoveContainer" containerID="720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.774406 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227\": container with ID starting with 720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227 not found: ID does not exist" containerID="720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.774458 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227"} err="failed to get container status \"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227\": rpc error: code = NotFound desc = could not find container \"720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227\": container with ID starting with 720a55d9fe666e1146ea8be01755e022c6296fe1ff71daa1e836c91bf6c3b227 not found: ID does not exist" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.774519 4792 scope.go:117] "RemoveContainer" containerID="9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.774974 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539\": container with ID starting with 9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539 not found: ID does not exist" containerID="9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.775013 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539"} err="failed to get container status \"9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539\": rpc error: code = NotFound desc = could not find container \"9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539\": container with ID starting with 9be6570cf736fb1eaaae07f936e59eec102898142418341a80182a5b84df8539 not found: ID does not exist" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.775042 4792 scope.go:117] "RemoveContainer" containerID="36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b" Feb 16 21:41:01 crc kubenswrapper[4792]: E0216 21:41:01.775332 4792 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b\": container with ID starting with 36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b not found: ID does not exist" containerID="36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.775358 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b"} err="failed to get container status \"36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b\": rpc error: code = NotFound desc = could not find container \"36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b\": container with ID starting with 36cd2103bdeb67b8337271eb9ed946e37d1dc22aa5a5c20e34f24b2f9e90a57b not found: ID does not exist" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.784744 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhnh\" (UniqueName: \"kubernetes.io/projected/b6dbc74b-0b2a-4615-b871-7c312e47854b-kube-api-access-2dhnh\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.784768 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.784780 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dbc74b-0b2a-4615-b871-7c312e47854b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.877686 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:01 crc kubenswrapper[4792]: I0216 21:41:01.999492 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"] Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.006621 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6frh"] Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.038067 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a23b11-cd5b-4d2b-adcf-06a39a1c62d8" path="/var/lib/kubelet/pods/84a23b11-cd5b-4d2b-adcf-06a39a1c62d8/volumes" Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.038800 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9734b6b8-841c-437d-acf0-b1e3948ee61f" path="/var/lib/kubelet/pods/9734b6b8-841c-437d-acf0-b1e3948ee61f/volumes" Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.040127 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" path="/var/lib/kubelet/pods/b6dbc74b-0b2a-4615-b871-7c312e47854b/volumes" Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.044716 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 21:41:02 crc kubenswrapper[4792]: W0216 21:41:02.054819 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod720c35b9_54f6_4880_afd1_10a28ca5fbae.slice/crio-73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29 WatchSource:0}: Error finding container 73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29: Status 404 returned error can't find the container with id 73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29 Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.621051 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.621592 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpmg2" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="registry-server" containerID="cri-o://33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597" gracePeriod=2 Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.675329 4792 generic.go:334] "Generic (PLEG): container finished" podID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerID="4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238" exitCode=0 Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.675378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerDied","Data":"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238"} Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.683950 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"720c35b9-54f6-4880-afd1-10a28ca5fbae","Type":"ContainerStarted","Data":"cf01e74aab864932eb143fe7fcad4e72de0942742ef88aebad32ac16d2939eef"} Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.683990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"720c35b9-54f6-4880-afd1-10a28ca5fbae","Type":"ContainerStarted","Data":"73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29"} Feb 16 21:41:02 crc kubenswrapper[4792]: I0216 21:41:02.718673 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.7186498989999999 podStartE2EDuration="1.718649899s" podCreationTimestamp="2026-02-16 21:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:41:02.718276809 +0000 UTC m=+195.371555710" watchObservedRunningTime="2026-02-16 21:41:02.718649899 +0000 UTC m=+195.371928790" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.045174 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.099158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities\") pod \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.099270 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzj2r\" (UniqueName: \"kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r\") pod \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.099339 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content\") pod \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\" (UID: \"4e7e955d-adca-4bb7-97cd-c261aa9bd04a\") " Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.099914 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities" (OuterVolumeSpecName: "utilities") pod "4e7e955d-adca-4bb7-97cd-c261aa9bd04a" (UID: "4e7e955d-adca-4bb7-97cd-c261aa9bd04a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.107786 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r" (OuterVolumeSpecName: "kube-api-access-xzj2r") pod "4e7e955d-adca-4bb7-97cd-c261aa9bd04a" (UID: "4e7e955d-adca-4bb7-97cd-c261aa9bd04a"). InnerVolumeSpecName "kube-api-access-xzj2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.200554 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.200589 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzj2r\" (UniqueName: \"kubernetes.io/projected/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-kube-api-access-xzj2r\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.222923 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e7e955d-adca-4bb7-97cd-c261aa9bd04a" (UID: "4e7e955d-adca-4bb7-97cd-c261aa9bd04a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.302206 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e955d-adca-4bb7-97cd-c261aa9bd04a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.692352 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerID="33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597" exitCode=0 Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.692456 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpmg2" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.692462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerDied","Data":"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597"} Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.692572 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpmg2" event={"ID":"4e7e955d-adca-4bb7-97cd-c261aa9bd04a","Type":"ContainerDied","Data":"d93ebf4067b98f6619523653a9cdaaf1947fde566707fb50e5efac6c1a01b0c0"} Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.692608 4792 scope.go:117] "RemoveContainer" containerID="33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.700223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerStarted","Data":"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0"} Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.717304 4792 scope.go:117] "RemoveContainer" containerID="5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.730317 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5zmq" podStartSLOduration=3.477339707 podStartE2EDuration="48.730299722s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="2026-02-16 21:40:17.943918099 +0000 UTC m=+150.597196990" lastFinishedPulling="2026-02-16 21:41:03.196878104 +0000 UTC m=+195.850157005" 
observedRunningTime="2026-02-16 21:41:03.7169398 +0000 UTC m=+196.370218691" watchObservedRunningTime="2026-02-16 21:41:03.730299722 +0000 UTC m=+196.383578613" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.736311 4792 scope.go:117] "RemoveContainer" containerID="80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.736417 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.739811 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpmg2"] Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.759126 4792 scope.go:117] "RemoveContainer" containerID="33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597" Feb 16 21:41:03 crc kubenswrapper[4792]: E0216 21:41:03.767069 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597\": container with ID starting with 33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597 not found: ID does not exist" containerID="33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.767131 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597"} err="failed to get container status \"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597\": rpc error: code = NotFound desc = could not find container \"33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597\": container with ID starting with 33accd8d5bde601099a899ef166be73d9657568bb44dde1cbe2f5ef378dd6597 not found: ID does not exist" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.767165 4792 scope.go:117] "RemoveContainer" containerID="5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419" Feb 16 21:41:03 crc kubenswrapper[4792]: E0216 21:41:03.767562 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419\": container with ID starting with 5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419 not found: ID does not exist" containerID="5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.767684 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419"} err="failed to get container status \"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419\": rpc error: code = NotFound desc = could not find container \"5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419\": container with ID starting with 5e23c2c4192053830fcc06144a1537d2b05cd512e9914bad1125470a920d0419 not found: ID does not exist" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.767719 4792 scope.go:117] "RemoveContainer" containerID="80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155" Feb 16 21:41:03 crc kubenswrapper[4792]: E0216 21:41:03.768026 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155\": container with ID starting with 80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155 not found: ID does not exist" containerID="80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155" Feb 16 21:41:03 crc kubenswrapper[4792]: I0216 21:41:03.768070 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155"} err="failed to get container status \"80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155\": rpc error: code = NotFound desc = could not find container \"80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155\": container with ID starting with 80bbf2c0e98be73444c791f2f94c9103adff518214211d24721978817682b155 not found: ID does not exist" Feb 16 21:41:04 crc kubenswrapper[4792]: I0216 21:41:04.033641 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" path="/var/lib/kubelet/pods/4e7e955d-adca-4bb7-97cd-c261aa9bd04a/volumes" Feb 16 21:41:05 crc kubenswrapper[4792]: I0216 21:41:05.529499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5zmq" Feb 16 21:41:05 crc kubenswrapper[4792]: I0216 21:41:05.529724 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5zmq" Feb 16 21:41:05 crc kubenswrapper[4792]: I0216 21:41:05.582104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5zmq" Feb 16 21:41:15 crc kubenswrapper[4792]: I0216 21:41:15.590065 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5zmq" Feb 16 21:41:22 crc kubenswrapper[4792]: I0216 21:41:22.830166 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerName="oauth-openshift" containerID="cri-o://86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151" gracePeriod=15 Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.191687 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.234398 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7857967b8b-v7qtm"] Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235820 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="extract-content" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235859 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="extract-content" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235877 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="extract-utilities" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235883 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="extract-utilities" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235893 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="extract-utilities" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235900 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="extract-utilities" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235916 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerName="oauth-openshift" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235922 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerName="oauth-openshift" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235929 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235935 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235943 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235948 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.235956 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="extract-content" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.235961 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="extract-content" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.236124 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerName="oauth-openshift" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.236136 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7e955d-adca-4bb7-97cd-c261aa9bd04a" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.236151 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b6dbc74b-0b2a-4615-b871-7c312e47854b" containerName="registry-server" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.236567 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.243781 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7857967b8b-v7qtm"] Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371660 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371752 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371800 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371836 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371891 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.371932 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.372133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.372283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: 
\"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.372384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.373631 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.373643 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.373954 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374456 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374573 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374751 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374805 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5r5z\" (UniqueName: \"kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z\") pod \"eb35cffd-4266-41df-89cc-d136fd0f6954\" (UID: \"eb35cffd-4266-41df-89cc-d136fd0f6954\") " Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.374817 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56l5g\" (UniqueName: \"kubernetes.io/projected/f74d1900-aa57-470f-99ad-0b994383cd60-kube-api-access-56l5g\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-login\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375765 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375893 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.375921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-session\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376100 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-error\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f74d1900-aa57-470f-99ad-0b994383cd60-audit-dir\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376425 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376552 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-audit-policies\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376683 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376711 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376736 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376756 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.376776 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.378439 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.379209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.379766 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.380369 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.380532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z" (OuterVolumeSpecName: "kube-api-access-m5r5z") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "kube-api-access-m5r5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.380937 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.381269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.382024 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.382051 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "eb35cffd-4266-41df-89cc-d136fd0f6954" (UID: "eb35cffd-4266-41df-89cc-d136fd0f6954"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f74d1900-aa57-470f-99ad-0b994383cd60-audit-dir\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478389 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-audit-policies\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56l5g\" (UniqueName: \"kubernetes.io/projected/f74d1900-aa57-470f-99ad-0b994383cd60-kube-api-access-56l5g\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478476 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-login\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 
crc kubenswrapper[4792]: I0216 21:41:23.478531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-session\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478619 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-error\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478761 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478774 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478785 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 
21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478796 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478810 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478822 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478849 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478864 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb35cffd-4266-41df-89cc-d136fd0f6954-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478875 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5r5z\" (UniqueName: \"kubernetes.io/projected/eb35cffd-4266-41df-89cc-d136fd0f6954-kube-api-access-m5r5z\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.480039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.480070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.478465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f74d1900-aa57-470f-99ad-0b994383cd60-audit-dir\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.480867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-audit-policies\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 
21:41:23.481166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.481729 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.482134 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-login\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.482583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.482937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-error\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.483473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.483625 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-session\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.484480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.485020 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f74d1900-aa57-470f-99ad-0b994383cd60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.496735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56l5g\" (UniqueName: \"kubernetes.io/projected/f74d1900-aa57-470f-99ad-0b994383cd60-kube-api-access-56l5g\") pod \"oauth-openshift-7857967b8b-v7qtm\" (UID: \"f74d1900-aa57-470f-99ad-0b994383cd60\") " pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.557171 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.731848 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7857967b8b-v7qtm"] Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.818020 4792 generic.go:334] "Generic (PLEG): container finished" podID="eb35cffd-4266-41df-89cc-d136fd0f6954" containerID="86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151" exitCode=0 Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.818080 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.818098 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" event={"ID":"eb35cffd-4266-41df-89cc-d136fd0f6954","Type":"ContainerDied","Data":"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151"} Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.818198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jx4dt" event={"ID":"eb35cffd-4266-41df-89cc-d136fd0f6954","Type":"ContainerDied","Data":"3cca53dd5c9c47745c3ed6d739134568c13e777c2e19b94323bf36e1ec73be70"} Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.818228 4792 scope.go:117] "RemoveContainer" containerID="86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.823813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" event={"ID":"f74d1900-aa57-470f-99ad-0b994383cd60","Type":"ContainerStarted","Data":"f9854893ccc79c46074037941bbdabbaa5fcdf3aca296d4c868e2380e0e89f42"} Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.841255 4792 scope.go:117] "RemoveContainer" containerID="86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151" Feb 16 21:41:23 crc kubenswrapper[4792]: E0216 21:41:23.842762 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151\": container with ID starting with 86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151 not found: ID does not exist" containerID="86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.842824 4792 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151"} err="failed to get container status \"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151\": rpc error: code = NotFound desc = could not find container \"86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151\": container with ID starting with 86bf6140668b988ed9257cd71f9946bbfdcade671f4ded5b6d48bd3066e23151 not found: ID does not exist" Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.845169 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"] Feb 16 21:41:23 crc kubenswrapper[4792]: I0216 21:41:23.848056 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jx4dt"] Feb 16 21:41:24 crc kubenswrapper[4792]: I0216 21:41:24.041840 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb35cffd-4266-41df-89cc-d136fd0f6954" path="/var/lib/kubelet/pods/eb35cffd-4266-41df-89cc-d136fd0f6954/volumes" Feb 16 21:41:24 crc kubenswrapper[4792]: I0216 21:41:24.831356 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" event={"ID":"f74d1900-aa57-470f-99ad-0b994383cd60","Type":"ContainerStarted","Data":"5241d4f6eac084d0a1fdeabc51a0f30d7c3364835112d588d2bb742ea5c0f464"} Feb 16 21:41:24 crc kubenswrapper[4792]: I0216 21:41:24.831694 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:24 crc kubenswrapper[4792]: I0216 21:41:24.839223 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" Feb 16 21:41:24 crc kubenswrapper[4792]: I0216 21:41:24.849927 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7857967b8b-v7qtm" podStartSLOduration=27.849909842 podStartE2EDuration="27.849909842s" podCreationTimestamp="2026-02-16 21:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:41:24.845752394 +0000 UTC m=+217.499031295" watchObservedRunningTime="2026-02-16 21:41:24.849909842 +0000 UTC m=+217.503188723" Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.532861 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.533283 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.533350 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.534276 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.534276 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.534364 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b" gracePeriod=600 Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.878731 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b" exitCode=0 Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.878828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b"} Feb 16 21:41:31 crc kubenswrapper[4792]: I0216 21:41:31.878891 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f"} Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.032674 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033496 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033520 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033670 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033685 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033696 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033704 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033718 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033727 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033735 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 
21:41:40.033743 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033751 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033759 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033770 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033777 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.033788 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033796 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033668 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033909 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9" gracePeriod=15 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033939 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193" gracePeriod=15 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033948 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9" gracePeriod=15 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033986 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a" gracePeriod=15 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.033898 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec" gracePeriod=15 Feb 16 21:41:40 crc 
kubenswrapper[4792]: I0216 21:41:40.035730 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035759 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035768 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035775 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035783 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035792 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035801 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 21:41:40 crc kubenswrapper[4792]: E0216 21:41:40.035925 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.035934 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.037174 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.087916 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.087978 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088002 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088027 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088126 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.088160 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189432 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189565 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189636 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189581 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189625 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189603 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.189997 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.190023 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.190046 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.928542 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.930582 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.931582 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a" exitCode=0 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.931647 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9" exitCode=0 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.931662 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193" exitCode=0 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.931673 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9" exitCode=2 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.931714 4792 scope.go:117] "RemoveContainer" containerID="0d3732304749b59217f9ab4baeacc43d09794ffc40cf903fb897127fdce36cb7" Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.933743 4792 generic.go:334] "Generic (PLEG): container finished" podID="720c35b9-54f6-4880-afd1-10a28ca5fbae" containerID="cf01e74aab864932eb143fe7fcad4e72de0942742ef88aebad32ac16d2939eef" exitCode=0 Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.933794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"720c35b9-54f6-4880-afd1-10a28ca5fbae","Type":"ContainerDied","Data":"cf01e74aab864932eb143fe7fcad4e72de0942742ef88aebad32ac16d2939eef"} Feb 16 21:41:40 crc kubenswrapper[4792]: I0216 21:41:40.935213 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
Feb 16 21:41:41 crc kubenswrapper[4792]: I0216 21:41:41.947990 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.245223 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.246350 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.400134 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.400843 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.401392 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.401838 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412341 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir\") pod \"720c35b9-54f6-4880-afd1-10a28ca5fbae\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412421 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock\") pod \"720c35b9-54f6-4880-afd1-10a28ca5fbae\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "720c35b9-54f6-4880-afd1-10a28ca5fbae" (UID: "720c35b9-54f6-4880-afd1-10a28ca5fbae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412475 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access\") pod \"720c35b9-54f6-4880-afd1-10a28ca5fbae\" (UID: \"720c35b9-54f6-4880-afd1-10a28ca5fbae\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412494 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock" (OuterVolumeSpecName: "var-lock") pod "720c35b9-54f6-4880-afd1-10a28ca5fbae" (UID: "720c35b9-54f6-4880-afd1-10a28ca5fbae"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412810 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.412827 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/720c35b9-54f6-4880-afd1-10a28ca5fbae-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.418717 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "720c35b9-54f6-4880-afd1-10a28ca5fbae" (UID: "720c35b9-54f6-4880-afd1-10a28ca5fbae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.513954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514205 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514314 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514486 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.514450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.515334 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/720c35b9-54f6-4880-afd1-10a28ca5fbae-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.617012 4792 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.617055 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.965354 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.968232 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec" exitCode=0 Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.968337 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.968356 4792 scope.go:117] "RemoveContainer" containerID="b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.971951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"720c35b9-54f6-4880-afd1-10a28ca5fbae","Type":"ContainerDied","Data":"73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29"} Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.971983 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.971993 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c63820f763a5ce2b8902f2abbdd1e19b46867f1a867c62899a50dc69904c29" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.985797 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.986070 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:42 crc kubenswrapper[4792]: I0216 21:41:42.998863 4792 scope.go:117] "RemoveContainer" containerID="275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.000009 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.000323 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.015555 4792 scope.go:117] "RemoveContainer" containerID="5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.028418 4792 scope.go:117] "RemoveContainer" containerID="ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.044242 4792 scope.go:117] "RemoveContainer" containerID="57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.063174 4792 scope.go:117] "RemoveContainer" containerID="3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.082330 4792 scope.go:117] "RemoveContainer" containerID="b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.082756 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\": container with ID starting with b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a not found: ID does not exist" containerID="b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.082804 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a"} err="failed to get container status \"b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\": rpc error: code = NotFound desc = could not find container \"b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a\": container with ID starting with b641c8a1f9bc769b7e1c64151e29be5d4c9ae856b84d9c957a70ceb452bb2d4a not found: ID does not exist" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.082836 4792 scope.go:117] "RemoveContainer" containerID="275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.083146 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\": container with ID starting with 275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9 not found: ID does not exist" containerID="275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.083180 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9"} err="failed to get container status \"275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\": rpc error: code = NotFound desc = could not find container \"275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9\": container with ID starting with 275dc4691133f94b0045778825318490a2ac87387a6365dcf97d10b49f4915e9 not found: ID does not exist" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.083203 4792 scope.go:117] "RemoveContainer" containerID="5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.083519 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\": container with ID starting with 5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193 not found: ID does not exist" containerID="5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.083564 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193"} err="failed to get container status \"5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\": rpc error: code = NotFound desc = could not find container \"5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193\": container with ID starting with 5ba05600d9b9e7d1f7a6a5b7a0d1e149442622d637d97def9e2a64eff5336193 not found: ID does not exist" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.083612 4792 scope.go:117] "RemoveContainer" containerID="ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.085010 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\": container with ID starting with ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9 not found: ID does not exist" 
containerID="ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.085044 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9"} err="failed to get container status \"ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\": rpc error: code = NotFound desc = could not find container \"ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9\": container with ID starting with ce7922da4340b794b0674e37353eac8cce4b04bf3627ff0e766b0ddbcf34e1a9 not found: ID does not exist" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.085092 4792 scope.go:117] "RemoveContainer" containerID="57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.085319 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\": container with ID starting with 57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec not found: ID does not exist" containerID="57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.085349 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec"} err="failed to get container status \"57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\": rpc error: code = NotFound desc = could not find container \"57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec\": container with ID starting with 57095ed86b63e2bc85d56cc8c182ef501e71b406b567b58ce40e9f7104079fec not found: ID does not exist" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.085364 4792 scope.go:117] "RemoveContainer" containerID="3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8" Feb 16 21:41:43 crc kubenswrapper[4792]: E0216 21:41:43.085692 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\": container with ID starting with 3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8 not found: ID does not exist" containerID="3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8" Feb 16 21:41:43 crc kubenswrapper[4792]: I0216 21:41:43.085711 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8"} err="failed to get container status \"3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\": rpc error: code = NotFound desc = could not find container \"3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8\": container with ID starting with 3f2f725967e5d6137923de8a4f0d66cb9a4a375f42f15bf7f1343e4c504149b8 not found: ID does not exist" Feb 16 21:41:44 crc kubenswrapper[4792]: I0216 21:41:44.032630 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 21:41:45 crc kubenswrapper[4792]: E0216 21:41:45.071261 4792 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:45 crc kubenswrapper[4792]: I0216 21:41:45.071639 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:45 crc kubenswrapper[4792]: W0216 21:41:45.087996 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-97751cdfbb65078967961a5e8db45d16d342e09f40b2e2b6a1a8df6fdf17cf47 WatchSource:0}: Error finding container 97751cdfbb65078967961a5e8db45d16d342e09f40b2e2b6a1a8df6fdf17cf47: Status 404 returned error can't find the container with id 97751cdfbb65078967961a5e8db45d16d342e09f40b2e2b6a1a8df6fdf17cf47 Feb 16 21:41:45 crc kubenswrapper[4792]: E0216 21:41:45.090260 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894d80d4c4f664e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:41:45.089893966 +0000 UTC m=+237.743172857,LastTimestamp:2026-02-16 21:41:45.089893966 +0000 UTC m=+237.743172857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:41:45 crc kubenswrapper[4792]: I0216 21:41:45.987454 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d"} Feb 16 21:41:45 crc kubenswrapper[4792]: I0216 21:41:45.988016 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"97751cdfbb65078967961a5e8db45d16d342e09f40b2e2b6a1a8df6fdf17cf47"} Feb 16 21:41:45 crc kubenswrapper[4792]: E0216 21:41:45.988500 4792 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 21:41:45 crc kubenswrapper[4792]: I0216 21:41:45.988533 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 
16 21:41:48 crc kubenswrapper[4792]: I0216 21:41:48.028747 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.186514 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.187268 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.187818 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.188250 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.188686 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:48 crc kubenswrapper[4792]: I0216 21:41:48.188728 4792 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.188986 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.390054 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Feb 16 21:41:48 crc kubenswrapper[4792]: E0216 21:41:48.790722 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Feb 16 21:41:49 crc kubenswrapper[4792]: E0216 21:41:49.490039 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894d80d4c4f664e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 21:41:45.089893966 +0000 UTC m=+237.743172857,LastTimestamp:2026-02-16 21:41:45.089893966 +0000 UTC m=+237.743172857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 21:41:49 crc kubenswrapper[4792]: E0216 21:41:49.592012 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Feb 16 21:41:51 crc kubenswrapper[4792]: E0216 21:41:51.123573 4792 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" volumeName="registry-storage" Feb 16 21:41:51 crc kubenswrapper[4792]: E0216 21:41:51.192559 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Feb 16 21:41:52 crc kubenswrapper[4792]: I0216 21:41:52.025581 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:52 crc kubenswrapper[4792]: I0216 21:41:52.027584 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:52 crc kubenswrapper[4792]: I0216 21:41:52.043451 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:52 crc kubenswrapper[4792]: I0216 21:41:52.043487 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:52 crc kubenswrapper[4792]: E0216 21:41:52.043982 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:52 crc kubenswrapper[4792]: I0216 21:41:52.044326 4792 util.go:30] "No sandbox for pod can be found. 
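While the API server is down, the node lease controller fails five consecutive renewals, falls back to "ensure lease", and then retries on a doubling schedule that is visible directly in the log: interval="200ms", "400ms", "800ms", "1.6s", "3.2s". A sketch of that doubling backoff; the 7s cap below is an assumption for illustration, not a constant quoted from the kubelet:

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry delay after each failed attempt, up to a
// maximum. The maxInterval value here is assumed, not taken from the source.
func nextInterval(cur, maxInterval time.Duration) time.Duration {
	next := cur * 2
	if next > maxInterval {
		next = maxInterval
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 5; i++ {
		// Reproduces the interval sequence seen above: 200ms .. 3.2s.
		fmt.Printf("Failed to ensure lease exists, will retry interval=%s\n", interval)
		interval = nextInterval(interval, 7*time.Second)
	}
}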
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.037645 4792 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="82df886cd4cc95785cf011a724849d6e715bcff718defc66d4d42e31272438e4" exitCode=0 Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.037703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"82df886cd4cc95785cf011a724849d6e715bcff718defc66d4d42e31272438e4"} Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.037740 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51e98889a71be83738f7ee09bbe0b79d12493013cf04984c1877c9052cb9c89f"} Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.038061 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.038078 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:53 crc kubenswrapper[4792]: E0216 21:41:53.038728 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:53 crc kubenswrapper[4792]: I0216 21:41:53.038976 4792 status_manager.go:851] "Failed to get status for pod" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Feb 16 21:41:54 crc kubenswrapper[4792]: I0216 21:41:54.051007 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca6a6d757cf9fb453db4c21d4db6b902ebd1c2d0ceb879dee7224cbd0330d50e"} Feb 16 21:41:54 crc kubenswrapper[4792]: I0216 21:41:54.052013 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dd8a2a4f8854c48c5ee0019e33442f75fcdff40d0171513a8cd14488bf5345b4"} Feb 16 21:41:54 crc kubenswrapper[4792]: I0216 21:41:54.052029 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5d29013d7be7164906c6a7801bd3b09722830ebabe32ea9aa4ff141cc3ae296"} Feb 16 21:41:54 crc kubenswrapper[4792]: I0216 21:41:54.052039 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b75a29900cafd262ff26c57f58fe7f1d6a9c047234250f8d0dd4c9119e80d5c1"} Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.062134 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.062212 4792 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c" exitCode=1 Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.062304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c"} Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.063088 4792 scope.go:117] "RemoveContainer" containerID="8088235c676d9ff6b7a36389ce8ff13e1ca012fd1fb56278470f109e3feca71c" Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.066821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"886dfcb12624c1d8cc85cdbc998bdaa329c7cc582ded1c1ae25a31be1ce41cc3"} Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.067101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.067272 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:55 crc kubenswrapper[4792]: I0216 21:41:55.067334 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:41:56 crc kubenswrapper[4792]: I0216 21:41:56.075272 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 21:41:56 crc kubenswrapper[4792]: I0216 21:41:56.076252 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"acd70576c9fa652dd15d8fd6b54bbab69af8dc9157d908e991e696daf0d31dc5"} Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.045640 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.045979 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.053336 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.553473 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.553696 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 
21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.553756 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 21:41:57 crc kubenswrapper[4792]: I0216 21:41:57.796407 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:42:00 crc kubenswrapper[4792]: I0216 21:42:00.072987 4792 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:42:00 crc kubenswrapper[4792]: I0216 21:42:00.099906 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:42:00 crc kubenswrapper[4792]: I0216 21:42:00.100153 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:42:00 crc kubenswrapper[4792]: I0216 21:42:00.103486 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 21:42:00 crc kubenswrapper[4792]: I0216 21:42:00.137453 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10782d71-4739-4775-ae27-09fe9e9fa5c7" Feb 16 21:42:01 crc kubenswrapper[4792]: I0216 21:42:01.104924 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:42:01 crc kubenswrapper[4792]: I0216 21:42:01.104957 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d8b10df-cff9-45fc-9dd8-2f80e3f16cfd" Feb 16 21:42:01 crc kubenswrapper[4792]: I0216 21:42:01.108572 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10782d71-4739-4775-ae27-09fe9e9fa5c7" Feb 16 21:42:07 crc kubenswrapper[4792]: I0216 21:42:07.553432 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 21:42:07 crc kubenswrapper[4792]: I0216 21:42:07.554302 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 21:42:09 crc kubenswrapper[4792]: I0216 21:42:09.742134 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 21:42:09 crc kubenswrapper[4792]: I0216 21:42:09.783724 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 
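The startup-probe failures above are the kubelet dialing https://192.168.126.11:10257/healthz and getting connection refused while kube-controller-manager restarts; any dial error counts as a probe failure, and the container is held un-started (and the pod un-ready) until the probe passes. A minimal HTTPS probe sketch; the one-second timeout and TLS-skip setting are assumptions for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP GET roughly the way a kubelet HTTPS probe does:
// a dial error or a status outside 2xx/3xx counts as a failure.
func probe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		// Kubelet probes do not verify the target's serving certificate;
		// this mirrors that for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probe("https://192.168.126.11:10257/healthz"))
}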
Feb 16 21:42:11 crc kubenswrapper[4792]: I0216 21:42:11.109167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 21:42:11 crc kubenswrapper[4792]: I0216 21:42:11.292249 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 21:42:11 crc kubenswrapper[4792]: I0216 21:42:11.572654 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 21:42:11 crc kubenswrapper[4792]: I0216 21:42:11.645804 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.303452 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.452957 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.456696 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.456749 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.460084 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.471668 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.47165262 podStartE2EDuration="12.47165262s" podCreationTimestamp="2026-02-16 21:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:42:12.470431026 +0000 UTC m=+265.123709927" watchObservedRunningTime="2026-02-16 21:42:12.47165262 +0000 UTC m=+265.124931511"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.722185 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.874861 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.944899 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.983355 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.986058 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 16 21:42:12 crc kubenswrapper[4792]: I0216 21:42:12.986352 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.103534 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
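Once the restarted apiserver is reachable, the kubelet's reflectors re-list and watch every ConfigMap and Secret referenced by pods on the node, which is what the burst of "Caches populated" lines records; the same list-watch machinery is what client-go exposes as informers. A sketch of waiting for one such cache to populate, assuming a reachable cluster and a kubeconfig at the path shown (both assumptions, not taken from the log):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("configmap added to cache") },
	})
	stop := make(chan struct{})
	factory.Start(stop)
	// Blocks until the initial list completes -- the moment a component
	// would log "Caches populated" for the type.
	cache.WaitForCacheSync(stop, inf.HasSynced)
	close(stop)
}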
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.275784 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.301692 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.325456 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.416113 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"]
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.416688 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsr8l" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="registry-server" containerID="cri-o://76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b" gracePeriod=30
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.423441 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5zmq"]
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.424898 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5zmq" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="registry-server" containerID="cri-o://377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0" gracePeriod=30
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.438121 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"]
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.438461 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" containerID="cri-o://7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806" gracePeriod=30
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.454050 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"]
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.454365 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-52qsq" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="registry-server" containerID="cri-o://9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296" gracePeriod=30
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.458590 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.462143 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"]
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.462695 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-np9jz" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="registry-server" containerID="cri-o://a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c" gracePeriod=30
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.753009 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.857802 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5zmq"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.865335 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsr8l"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.868879 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.871100 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52qsq"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.872289 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.881050 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9jz"
Feb 16 21:42:13 crc kubenswrapper[4792]: I0216 21:42:13.979333 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.028908 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.039986 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040327 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m5mt\" (UniqueName: \"kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt\") pod \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities\") pod \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040403 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities\") pod \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content\") pod \"04e057cc-fc7c-476d-8eae-f817ca57ed51\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040486 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content\") pod \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040508 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities\") pod \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040532 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content\") pod \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040563 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcxlw\" (UniqueName: \"kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw\") pod \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\" (UID: \"edd14fca-8d4f-4537-94f9-cebf5ffe935c\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040646 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnft\" (UniqueName: \"kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft\") pod \"04e057cc-fc7c-476d-8eae-f817ca57ed51\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.040734 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca\") pod \"18d326ed-a5e0-4663-bec0-8ee429a44c89\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.041481 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content\") pod \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\" (UID: \"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.041923 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jv7b\" (UniqueName: \"kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b\") pod \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\" (UID: \"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.041977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics\") pod \"18d326ed-a5e0-4663-bec0-8ee429a44c89\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042014 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities" (OuterVolumeSpecName: "utilities") pod "edd14fca-8d4f-4537-94f9-cebf5ffe935c" (UID: "edd14fca-8d4f-4537-94f9-cebf5ffe935c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042032 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities\") pod \"04e057cc-fc7c-476d-8eae-f817ca57ed51\" (UID: \"04e057cc-fc7c-476d-8eae-f817ca57ed51\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fvp8\" (UniqueName: \"kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8\") pod \"18d326ed-a5e0-4663-bec0-8ee429a44c89\" (UID: \"18d326ed-a5e0-4663-bec0-8ee429a44c89\") "
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities" (OuterVolumeSpecName: "utilities") pod "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" (UID: "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities" (OuterVolumeSpecName: "utilities") pod "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" (UID: "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042775 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042808 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.042825 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.043889 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities" (OuterVolumeSpecName: "utilities") pod "04e057cc-fc7c-476d-8eae-f817ca57ed51" (UID: "04e057cc-fc7c-476d-8eae-f817ca57ed51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.048038 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft" (OuterVolumeSpecName: "kube-api-access-hpnft") pod "04e057cc-fc7c-476d-8eae-f817ca57ed51" (UID: "04e057cc-fc7c-476d-8eae-f817ca57ed51"). InnerVolumeSpecName "kube-api-access-hpnft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.048118 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt" (OuterVolumeSpecName: "kube-api-access-5m5mt") pod "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" (UID: "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62"). InnerVolumeSpecName "kube-api-access-5m5mt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.048131 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw" (OuterVolumeSpecName: "kube-api-access-vcxlw") pod "edd14fca-8d4f-4537-94f9-cebf5ffe935c" (UID: "edd14fca-8d4f-4537-94f9-cebf5ffe935c"). InnerVolumeSpecName "kube-api-access-vcxlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.048672 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b" (OuterVolumeSpecName: "kube-api-access-6jv7b") pod "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" (UID: "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30"). InnerVolumeSpecName "kube-api-access-6jv7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.050451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8" (OuterVolumeSpecName: "kube-api-access-9fvp8") pod "18d326ed-a5e0-4663-bec0-8ee429a44c89" (UID: "18d326ed-a5e0-4663-bec0-8ee429a44c89"). InnerVolumeSpecName "kube-api-access-9fvp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.050996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "18d326ed-a5e0-4663-bec0-8ee429a44c89" (UID: "18d326ed-a5e0-4663-bec0-8ee429a44c89"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.051174 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "18d326ed-a5e0-4663-bec0-8ee429a44c89" (UID: "18d326ed-a5e0-4663-bec0-8ee429a44c89"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.076418 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.076420 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" (UID: "2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.080381 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.112666 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edd14fca-8d4f-4537-94f9-cebf5ffe935c" (UID: "edd14fca-8d4f-4537-94f9-cebf5ffe935c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.122121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" (UID: "4d46c62b-3da8-4f57-b7fe-e9b479d3eb30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.131200 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143517 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m5mt\" (UniqueName: \"kubernetes.io/projected/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-kube-api-access-5m5mt\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143549 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd14fca-8d4f-4537-94f9-cebf5ffe935c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143562 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143577 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcxlw\" (UniqueName: \"kubernetes.io/projected/edd14fca-8d4f-4537-94f9-cebf5ffe935c-kube-api-access-vcxlw\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143588 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpnft\" (UniqueName: \"kubernetes.io/projected/04e057cc-fc7c-476d-8eae-f817ca57ed51-kube-api-access-hpnft\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143617 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143629 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143642 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jv7b\" (UniqueName: \"kubernetes.io/projected/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30-kube-api-access-6jv7b\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc 
kubenswrapper[4792]: I0216 21:42:14.143653 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d326ed-a5e0-4663-bec0-8ee429a44c89-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143664 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.143674 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fvp8\" (UniqueName: \"kubernetes.io/projected/18d326ed-a5e0-4663-bec0-8ee429a44c89-kube-api-access-9fvp8\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.188969 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerID="76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b" exitCode=0 Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.188998 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsr8l" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.189019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerDied","Data":"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.189063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsr8l" event={"ID":"4d46c62b-3da8-4f57-b7fe-e9b479d3eb30","Type":"ContainerDied","Data":"30cab9361689286c1167ae9a03666687c8219dc1871524b9a2856332987b8ca1"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.189085 4792 scope.go:117] "RemoveContainer" containerID="76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.191431 4792 generic.go:334] "Generic (PLEG): container finished" podID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerID="7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806" exitCode=0 Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.191497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" event={"ID":"18d326ed-a5e0-4663-bec0-8ee429a44c89","Type":"ContainerDied","Data":"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.191705 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" event={"ID":"18d326ed-a5e0-4663-bec0-8ee429a44c89","Type":"ContainerDied","Data":"a770c4b144895b8baf5eb5ab279e0cc61bd2fce83e4f309c96959409f9085944"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.191776 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ss6x2" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.195414 4792 generic.go:334] "Generic (PLEG): container finished" podID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerID="a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c" exitCode=0 Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.195461 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerDied","Data":"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.195484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9jz" event={"ID":"04e057cc-fc7c-476d-8eae-f817ca57ed51","Type":"ContainerDied","Data":"c2d55592efefd820da8dd65967c3f3ed73a041eb94c8b2443d8e49005a4f6c90"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.195532 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9jz" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.197699 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04e057cc-fc7c-476d-8eae-f817ca57ed51" (UID: "04e057cc-fc7c-476d-8eae-f817ca57ed51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.198722 4792 generic.go:334] "Generic (PLEG): container finished" podID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerID="377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0" exitCode=0 Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.198766 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5zmq" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.198803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerDied","Data":"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.198835 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5zmq" event={"ID":"edd14fca-8d4f-4537-94f9-cebf5ffe935c","Type":"ContainerDied","Data":"2269db805cfd73135390bedfb4f35ab2564c9a5172f9869a15963d5aa53bbebf"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.202303 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerID="9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296" exitCode=0 Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.202344 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerDied","Data":"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.202376 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52qsq" event={"ID":"2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62","Type":"ContainerDied","Data":"78448ce1f49783a30ab5695e910f4aad33c54e3c8488a55836f88fc4427f0ea5"} Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.202465 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52qsq" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.217665 4792 scope.go:117] "RemoveContainer" containerID="6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.228287 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.234605 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ss6x2"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.241399 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.243549 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.244531 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e057cc-fc7c-476d-8eae-f817ca57ed51-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.247962 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsr8l"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.252768 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5zmq"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.255040 4792 scope.go:117] "RemoveContainer" containerID="06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407" Feb 16 21:42:14 crc 
kubenswrapper[4792]: I0216 21:42:14.257680 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5zmq"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.268629 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.270943 4792 scope.go:117] "RemoveContainer" containerID="76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.271626 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-52qsq"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.271788 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.271880 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b\": container with ID starting with 76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b not found: ID does not exist" containerID="76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.271935 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b"} err="failed to get container status \"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b\": rpc error: code = NotFound desc = could not find container \"76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b\": container with ID starting with 76c3a58261b7d969d2d9e4b35070bd6b8caa7eaec46b48385fd4463f3ca5018b not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.271959 4792 scope.go:117] "RemoveContainer" containerID="6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.272422 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d\": container with ID starting with 6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d not found: ID does not exist" containerID="6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.272464 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d"} err="failed to get container status \"6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d\": rpc error: code = NotFound desc = could not find container \"6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d\": container with ID starting with 6ddc895b33408af8c684bebc941873ddb3cc30d90ab58215dbea78332b3f312d not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.272493 4792 scope.go:117] "RemoveContainer" containerID="06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.272898 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407\": container with ID starting with 06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407 not found: ID does not exist" containerID="06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.272920 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407"} err="failed to get container status \"06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407\": rpc error: code = NotFound desc = could not find container \"06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407\": container with ID starting with 06fbe02df344c07aaac6d7dd8b5289ae4d9cc7109087c6f865ad7686683ef407 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.272935 4792 scope.go:117] "RemoveContainer" containerID="7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.286251 4792 scope.go:117] "RemoveContainer" containerID="7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.286527 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806\": container with ID starting with 7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806 not found: ID does not exist" containerID="7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.286561 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806"} err="failed to get container status \"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806\": rpc error: code = NotFound desc = could not find container \"7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806\": container with ID starting with 7cf18dbc703cf2ff87f74ed7ba9499f2bcc824524d79f806da04b4549be81806 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.286581 4792 scope.go:117] "RemoveContainer" containerID="a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.297403 4792 scope.go:117] "RemoveContainer" containerID="77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.311168 4792 scope.go:117] "RemoveContainer" containerID="4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.324196 4792 scope.go:117] "RemoveContainer" containerID="a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.324520 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c\": container with ID starting with a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c not found: ID does not exist" containerID="a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.324548 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c"} err="failed to get container status \"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c\": rpc error: code = NotFound desc = could not find container \"a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c\": container with ID starting with a17660df811d2ea0daefb7e6856e62cd110b06c8bb2ad7202ad36438841f574c not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.324573 4792 scope.go:117] "RemoveContainer" containerID="77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.324928 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884\": container with ID starting with 77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884 not found: ID does not exist" containerID="77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.324963 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884"} err="failed to get container status \"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884\": rpc error: code = NotFound desc = could not find container \"77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884\": container with ID starting with 77492d4e1bc8b45da7efc1b42ef05ff2a5a462ea9899740382dbe0be5630f884 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.324989 4792 scope.go:117] "RemoveContainer" containerID="4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.325322 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18\": container with ID starting with 4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18 not found: ID does not exist" containerID="4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.325343 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18"} err="failed to get container status \"4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18\": rpc error: code = NotFound desc = could not find container \"4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18\": container with ID starting with 4dce52e408b29d577a358a80c53700b5fd0a3ab7d116066ab5d5d0aa9a21cc18 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.325357 4792 scope.go:117] "RemoveContainer" containerID="377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.337115 4792 scope.go:117] "RemoveContainer" containerID="4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.350891 4792 scope.go:117] "RemoveContainer" containerID="b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3" Feb 16 21:42:14 
crc kubenswrapper[4792]: I0216 21:42:14.362821 4792 scope.go:117] "RemoveContainer" containerID="377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.363224 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0\": container with ID starting with 377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0 not found: ID does not exist" containerID="377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.363325 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0"} err="failed to get container status \"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0\": rpc error: code = NotFound desc = could not find container \"377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0\": container with ID starting with 377ddc273b5878e0d25a7165597bcaf9449f1572f900de016859b870ffb86cc0 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.363354 4792 scope.go:117] "RemoveContainer" containerID="4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.363813 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238\": container with ID starting with 4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238 not found: ID does not exist" containerID="4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.363845 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238"} err="failed to get container status \"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238\": rpc error: code = NotFound desc = could not find container \"4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238\": container with ID starting with 4ae7a2085967f6bc1d2804386d289ad57a75f4606e5fead5b10dc156fffdf238 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.363917 4792 scope.go:117] "RemoveContainer" containerID="b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.366977 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3\": container with ID starting with b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3 not found: ID does not exist" containerID="b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.367036 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3"} err="failed to get container status \"b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3\": rpc error: code = NotFound desc = could not find container 
\"b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3\": container with ID starting with b1e2203db82f868cf4f62f2ab728a5f76b96d43faa3b464e3ed5c439630fe5f3 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.367179 4792 scope.go:117] "RemoveContainer" containerID="9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.382313 4792 scope.go:117] "RemoveContainer" containerID="a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.393808 4792 scope.go:117] "RemoveContainer" containerID="e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.405131 4792 scope.go:117] "RemoveContainer" containerID="9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.405718 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296\": container with ID starting with 9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296 not found: ID does not exist" containerID="9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.405746 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296"} err="failed to get container status \"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296\": rpc error: code = NotFound desc = could not find container \"9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296\": container with ID starting with 9e6e8d6c895ae3a6d0a204d6b419c94d139995ab78d4d5688813a61196cf0296 not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.405768 4792 scope.go:117] "RemoveContainer" containerID="a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.405969 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b\": container with ID starting with a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b not found: ID does not exist" containerID="a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.405990 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b"} err="failed to get container status \"a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b\": rpc error: code = NotFound desc = could not find container \"a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b\": container with ID starting with a9c9bc39151955a94eafef4e237934da36eb17c02842eac4bd753edb97d8926b not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.406003 4792 scope.go:117] "RemoveContainer" containerID="e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b" Feb 16 21:42:14 crc kubenswrapper[4792]: E0216 21:42:14.406231 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b\": container with ID starting with e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b not found: ID does not exist" containerID="e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.406253 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b"} err="failed to get container status \"e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b\": rpc error: code = NotFound desc = could not find container \"e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b\": container with ID starting with e4a0bbeb1c49f819bee7a6711a9fef9f90de5a84597d3b6210afd2fe64e49c3b not found: ID does not exist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.434120 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.455779 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.520415 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.523760 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-np9jz"] Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.560061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.669321 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.754511 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.836835 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.837342 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.953219 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 21:42:14 crc kubenswrapper[4792]: I0216 21:42:14.967585 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.008513 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.117323 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.241142 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.294584 4792 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.394051 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.415932 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.550233 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.823543 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.864265 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.911786 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:42:15 crc kubenswrapper[4792]: I0216 21:42:15.979630 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.006057 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.033083 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" path="/var/lib/kubelet/pods/04e057cc-fc7c-476d-8eae-f817ca57ed51/volumes" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.033703 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" path="/var/lib/kubelet/pods/18d326ed-a5e0-4663-bec0-8ee429a44c89/volumes" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.034131 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" path="/var/lib/kubelet/pods/2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62/volumes" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.035224 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" path="/var/lib/kubelet/pods/4d46c62b-3da8-4f57-b7fe-e9b479d3eb30/volumes" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.035941 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" path="/var/lib/kubelet/pods/edd14fca-8d4f-4537-94f9-cebf5ffe935c/volumes" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.057717 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.085579 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.155315 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.157696 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.200244 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.200962 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.227448 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.256490 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.336465 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.453503 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.473717 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.482662 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.651208 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.778326 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.780677 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.863523 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.940825 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 21:42:16 crc kubenswrapper[4792]: I0216 21:42:16.941905 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.080726 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.098585 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.103072 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.103206 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.150046 4792 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.155101 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.159221 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.186943 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.203339 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.255655 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.287490 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.291898 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.374696 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.376298 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.420263 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.493285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.500972 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.543019 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.557216 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.562477 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.610900 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.624704 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.830132 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 
Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.836468 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.892791 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 21:42:17 crc kubenswrapper[4792]: I0216 21:42:17.999203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.072496 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.101914 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.177039 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.278264 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.289805 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.346215 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.398117 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.478091 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.588056 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.598760 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.626933 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.667946 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.674230 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.700427 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.713720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.734587 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.767474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.776970 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.778040 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.787341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.835591 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.842002 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.876741 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.920895 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.973100 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 21:42:18 crc kubenswrapper[4792]: I0216 21:42:18.975055 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.004625 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.047050 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.051232 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.109452 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.183099 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.199946 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.356082 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.358156 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.386552 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.398238 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.448781 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.463656 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.534162 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.555719 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.569423 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.597420 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.753879 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.807632 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.852298 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.873126 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.911455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.970241 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.988228 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 21:42:19 crc kubenswrapper[4792]: I0216 21:42:19.997906 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.040489 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.109804 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.188300 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.275562 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.340324 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.343044 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.344411 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.471814 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.562074 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.635718 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.729858 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.776944 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.800087 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 21:42:20 crc kubenswrapper[4792]: I0216 21:42:20.944773 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.017479 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.059293 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.063702 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.165390 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.262100 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.293231 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.412479 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.447869 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.507813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.539856 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.546361 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.646870 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.841043 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.863292 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.909001 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.913151 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 16 21:42:21 crc kubenswrapper[4792]: I0216 21:42:21.987983 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.048052 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.072047 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.137705 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.169825 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.196887 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.237462 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.420718 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.422220 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.422444 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d" gracePeriod=5
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.472015 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.472027 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.567014 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.582865 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.596200 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.598948 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.687765 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.701057 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.726151 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.745922 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.769015 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.800466 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.816998 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.875557 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 21:42:22 crc kubenswrapper[4792]: I0216 21:42:22.982909 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.002623 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.085615 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.323987 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.355346 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.367092 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.485450 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.501214 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.564932 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.583614 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.695403 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.791030 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.939072 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.959955 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 21:42:23 crc kubenswrapper[4792]: I0216 21:42:23.972366 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.266162 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.316879 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.385007 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.435737 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.511769 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.532753 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.655295 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.679657 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.843372 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.910088 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 21:42:24 crc kubenswrapper[4792]: I0216 21:42:24.935958 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.057083 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.067984 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.151967 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.202890 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.252078 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.253050 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.265303 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.366265 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.470506 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.668657 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.814650 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 21:42:25 crc kubenswrapper[4792]: I0216 21:42:25.824302 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.166008 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.266135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.284952 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.480938 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.709508 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.750898 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.820966 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 21:42:26 crc kubenswrapper[4792]: I0216 21:42:26.911726 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.179053 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.557933 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.558046 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.702693 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.702832 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.702787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.703168 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.703290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.703378 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.703411 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.704844 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.704894 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.705580 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.705644 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.705658 4792 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.705672 4792 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.715296 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.807160 4792 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 21:42:27 crc kubenswrapper[4792]: I0216 21:42:27.860937 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.035094 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.281370 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.281431 4792 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d" exitCode=137
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.281489 4792 scope.go:117] "RemoveContainer" containerID="252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.281503 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.312960 4792 scope.go:117] "RemoveContainer" containerID="252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d"
Feb 16 21:42:28 crc kubenswrapper[4792]: E0216 21:42:28.313478 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d\": container with ID starting with 252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d not found: ID does not exist" containerID="252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.313508 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d"} err="failed to get container status \"252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d\": rpc error: code = NotFound desc = could not find container \"252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d\": container with ID starting with 252ee62870241079f4ade1760bf61e0e0bb72e0ceb9e680f959d94e1eced739d not found: ID does not exist"
Feb 16 21:42:28 crc kubenswrapper[4792]: I0216 21:42:28.389799 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 21:42:29 crc kubenswrapper[4792]: I0216 21:42:29.037917 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
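
Note on the startup-monitor teardown above: exitCode=137 is 128+9, i.e. the container was terminated with SIGKILL after the 5-second grace period requested at 21:42:22.422444 (gracePeriod=5) expired. The NotFound errors that follow the second "RemoveContainer" appear to be harmless: CRI-O had already removed the container by the time the kubelet retried the status lookup, so "ID does not exist" is the expected terminal state here.
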
podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866476 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866491 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866498 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866506 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866514 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866523 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866531 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866543 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866550 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866561 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866568 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866580 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866587 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866617 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866628 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866641 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" containerName="installer" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866648 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" containerName="installer" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866660 4792 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866667 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866676 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866683 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866693 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866699 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866709 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866716 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866728 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866736 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="extract-content" Feb 16 21:42:32 crc kubenswrapper[4792]: E0216 21:42:32.866749 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866756 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="extract-utilities" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866885 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="720c35b9-54f6-4880-afd1-10a28ca5fbae" containerName="installer" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866900 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd14fca-8d4f-4537-94f9-cebf5ffe935c" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866910 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d326ed-a5e0-4663-bec0-8ee429a44c89" containerName="marketplace-operator" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866924 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9c65e4-9fd9-463f-b5e6-712ecc7cbb62" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866935 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e057cc-fc7c-476d-8eae-f817ca57ed51" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.866945 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d46c62b-3da8-4f57-b7fe-e9b479d3eb30" containerName="registry-server" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 
21:42:32.866954 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.867309 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.869651 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.869941 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.870229 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.871211 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.882149 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.884993 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m6k42"] Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.973816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0847734c-681b-4f22-af87-370debd04712-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.973882 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0847734c-681b-4f22-af87-370debd04712-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:32 crc kubenswrapper[4792]: I0216 21:42:32.974158 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqjn\" (UniqueName: \"kubernetes.io/projected/0847734c-681b-4f22-af87-370debd04712-kube-api-access-xwqjn\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.075034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0847734c-681b-4f22-af87-370debd04712-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.075104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0847734c-681b-4f22-af87-370debd04712-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.075262 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqjn\" (UniqueName: \"kubernetes.io/projected/0847734c-681b-4f22-af87-370debd04712-kube-api-access-xwqjn\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.076590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0847734c-681b-4f22-af87-370debd04712-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.080184 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0847734c-681b-4f22-af87-370debd04712-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.101264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwqjn\" (UniqueName: \"kubernetes.io/projected/0847734c-681b-4f22-af87-370debd04712-kube-api-access-xwqjn\") pod \"marketplace-operator-79b997595-m6k42\" (UID: \"0847734c-681b-4f22-af87-370debd04712\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.183679 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 21:42:33 crc kubenswrapper[4792]: I0216 21:42:33.395071 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m6k42"]
Feb 16 21:42:34 crc kubenswrapper[4792]: I0216 21:42:34.313168 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" event={"ID":"0847734c-681b-4f22-af87-370debd04712","Type":"ContainerStarted","Data":"d9599df2f06f7824d3f202e467fddde5d929ce8cf328a33e52f3b91bdd173a0b"}
Feb 16 21:42:34 crc kubenswrapper[4792]: I0216 21:42:34.313509 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42"
Feb 16 21:42:34 crc kubenswrapper[4792]: I0216 21:42:34.313519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" event={"ID":"0847734c-681b-4f22-af87-370debd04712","Type":"ContainerStarted","Data":"e3ac8a5286af6f7adb4be4a8aa4bb21bf512cf8ae6b41935ece96ba2988a2858"}
Feb 16 21:42:34 crc kubenswrapper[4792]: I0216 21:42:34.316766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42"
Feb 16 21:42:34 crc kubenswrapper[4792]: I0216 21:42:34.330439 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m6k42" podStartSLOduration=2.330417601 podStartE2EDuration="2.330417601s" podCreationTimestamp="2026-02-16 21:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:42:34.330140723 +0000 UTC m=+286.983419624" watchObservedRunningTime="2026-02-16 21:42:34.330417601 +0000 UTC m=+286.983696492"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.916008 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"]
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.917790 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.924189 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.924642 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.925380 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"]
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.925544 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.925716 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 21:42:43 crc kubenswrapper[4792]: I0216 21:42:43.926158 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.011848 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.011932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.012398 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24x2\" (UniqueName: \"kubernetes.io/projected/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-kube-api-access-m24x2\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.113438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24x2\" (UniqueName: \"kubernetes.io/projected/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-kube-api-access-m24x2\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.113829 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.113931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.115193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.125065 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.135736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24x2\" (UniqueName: \"kubernetes.io/projected/ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0-kube-api-access-m24x2\") pod \"cluster-monitoring-operator-6d5b84845-6ml7m\" (UID: \"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.246642 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"
Feb 16 21:42:44 crc kubenswrapper[4792]: I0216 21:42:44.488891 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m"]
Feb 16 21:42:45 crc kubenswrapper[4792]: I0216 21:42:45.375232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m" event={"ID":"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0","Type":"ContainerStarted","Data":"0a2e06e1f7ad2b117c2ebbf5e7fa9352f5bd81a81605e8a1407152c902d840e7"}
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.841521 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"]
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.842420 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.844618 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.849095 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.886922 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"]
Feb 16 21:42:46 crc kubenswrapper[4792]: I0216 21:42:46.950377 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:42:46 crc kubenswrapper[4792]: E0216 21:42:46.950551 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:42:46 crc kubenswrapper[4792]: E0216 21:42:46.950672 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:42:47.450647986 +0000 UTC m=+300.103926877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:42:47 crc kubenswrapper[4792]: I0216 21:42:47.389508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m" event={"ID":"ffc2ddc0-29b0-4f38-b8ef-367b6aabb5a0","Type":"ContainerStarted","Data":"da4d67aef093cbd48d74a887a05721701ffdadf2c6744023c6efe45cd42c167a"}
Feb 16 21:42:47 crc kubenswrapper[4792]: I0216 21:42:47.412489 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-6ml7m" podStartSLOduration=2.655904333 podStartE2EDuration="4.412456054s" podCreationTimestamp="2026-02-16 21:42:43 +0000 UTC" firstStartedPulling="2026-02-16 21:42:44.499066781 +0000 UTC m=+297.152345682" lastFinishedPulling="2026-02-16 21:42:46.255618512 +0000 UTC m=+298.908897403" observedRunningTime="2026-02-16 21:42:47.412298289 +0000 UTC m=+300.065577260" watchObservedRunningTime="2026-02-16 21:42:47.412456054 +0000 UTC m=+300.065735025"
Feb 16 21:42:47 crc kubenswrapper[4792]: I0216 21:42:47.456211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:42:47 crc kubenswrapper[4792]: E0216 21:42:47.456375 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:42:47 crc kubenswrapper[4792]: E0216 21:42:47.456455 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:42:48.456425468 +0000 UTC m=+301.109704379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found
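
Note on the "Observed pod startup duration" entry above: podStartE2EDuration (4.412456054s) runs from podCreationTimestamp (21:42:43) to watchObservedRunningTime (21:42:47.412456054), while podStartSLOduration excludes the image-pull window, i.e. 4.412456054s - (m=+298.908897403 - m=+297.152345682 = 1.756551721s) = 2.655904333s. For the marketplace-operator pod at 21:42:34.330439 the pull timestamps are zero values (no pull was needed), so the two durations coincide.
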
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:42:47 crc kubenswrapper[4792]: I0216 21:42:47.804581 4792 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 21:42:48 crc kubenswrapper[4792]: I0216 21:42:48.468464 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:42:48 crc kubenswrapper[4792]: E0216 21:42:48.468632 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:42:48 crc kubenswrapper[4792]: E0216 21:42:48.468729 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:42:50.468702094 +0000 UTC m=+303.121981005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:42:50 crc kubenswrapper[4792]: I0216 21:42:50.494237 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:42:50 crc kubenswrapper[4792]: E0216 21:42:50.494411 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:42:50 crc kubenswrapper[4792]: E0216 21:42:50.494653 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:42:54.494626936 +0000 UTC m=+307.147905837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:42:54 crc kubenswrapper[4792]: I0216 21:42:54.549219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:42:54 crc kubenswrapper[4792]: E0216 21:42:54.549428 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:42:54 crc kubenswrapper[4792]: E0216 21:42:54.549886 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:43:02.549855542 +0000 UTC m=+315.203134433 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:43:02 crc kubenswrapper[4792]: I0216 21:43:02.554665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"
Feb 16 21:43:02 crc kubenswrapper[4792]: E0216 21:43:02.554865 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 16 21:43:02 crc kubenswrapper[4792]: E0216 21:43:02.555531 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:43:18.555504347 +0000 UTC m=+331.208783278 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found
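
Note: the durationBeforeRetry values in the nestedpendingoperations.go entries above double on every failed attempt (1s, 2s, 4s, 8s, 16s here, and 32s at 21:43:18 further down): the kubelet retries the failing MountVolume.SetUp under per-operation exponential backoff rather than hot-looping against the API. A minimal Go sketch of that doubling policy, with illustrative constants chosen to match the progression in this log (not values taken from kubelet's source):

    // backoff.go - a doubling retry delay like the one visible in the
    // durationBeforeRetry fields above. initialDelay and maxDelay are
    // assumptions for illustration only.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 1 * time.Second
        maxDelay     = 2 * time.Minute
    )

    // nextDelay doubles the previous delay, clamped to maxDelay.
    func nextDelay(prev time.Duration) time.Duration {
        if prev <= 0 {
            return initialDelay
        }
        if d := 2 * prev; d < maxDelay {
            return d
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for i := 1; i <= 6; i++ {
            d = nextDelay(d)
            fmt.Printf("retry %d after %v\n", i, d) // 1s 2s 4s 8s 16s 32s
        }
    }
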
Feb 16 21:43:04 crc kubenswrapper[4792]: I0216 21:43:04.757736 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"]
Feb 16 21:43:04 crc kubenswrapper[4792]: I0216 21:43:04.758157 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" containerName="controller-manager" containerID="cri-o://bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613" gracePeriod=30
Feb 16 21:43:04 crc kubenswrapper[4792]: I0216 21:43:04.854757 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"]
Feb 16 21:43:04 crc kubenswrapper[4792]: I0216 21:43:04.855016 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerName="route-controller-manager" containerID="cri-o://b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd" gracePeriod=30
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.116757 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk"
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.190385 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert\") pod \"74c00cd5-2613-4930-9091-9061ea9496bf\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") "
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.190431 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca\") pod \"74c00cd5-2613-4930-9091-9061ea9496bf\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") "
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.190474 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdclc\" (UniqueName: \"kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc\") pod \"74c00cd5-2613-4930-9091-9061ea9496bf\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") "
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.190501 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config\") pod \"74c00cd5-2613-4930-9091-9061ea9496bf\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") "
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.190554 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles\") pod \"74c00cd5-2613-4930-9091-9061ea9496bf\" (UID: \"74c00cd5-2613-4930-9091-9061ea9496bf\") "
Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.191321 4792
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74c00cd5-2613-4930-9091-9061ea9496bf" (UID: "74c00cd5-2613-4930-9091-9061ea9496bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.191344 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "74c00cd5-2613-4930-9091-9061ea9496bf" (UID: "74c00cd5-2613-4930-9091-9061ea9496bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.192046 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config" (OuterVolumeSpecName: "config") pod "74c00cd5-2613-4930-9091-9061ea9496bf" (UID: "74c00cd5-2613-4930-9091-9061ea9496bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.195543 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74c00cd5-2613-4930-9091-9061ea9496bf" (UID: "74c00cd5-2613-4930-9091-9061ea9496bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.195943 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc" (OuterVolumeSpecName: "kube-api-access-kdclc") pod "74c00cd5-2613-4930-9091-9061ea9496bf" (UID: "74c00cd5-2613-4930-9091-9061ea9496bf"). InnerVolumeSpecName "kube-api-access-kdclc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.197567 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291481 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca\") pod \"86214154-257c-46e0-8f95-8a16bd86f9ec\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291572 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jc2r\" (UniqueName: \"kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r\") pod \"86214154-257c-46e0-8f95-8a16bd86f9ec\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291642 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert\") pod \"86214154-257c-46e0-8f95-8a16bd86f9ec\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291681 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config\") pod \"86214154-257c-46e0-8f95-8a16bd86f9ec\" (UID: \"86214154-257c-46e0-8f95-8a16bd86f9ec\") " Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291834 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291846 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdclc\" (UniqueName: \"kubernetes.io/projected/74c00cd5-2613-4930-9091-9061ea9496bf-kube-api-access-kdclc\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291857 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291866 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74c00cd5-2613-4930-9091-9061ea9496bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.291874 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c00cd5-2613-4930-9091-9061ea9496bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.292427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "86214154-257c-46e0-8f95-8a16bd86f9ec" (UID: "86214154-257c-46e0-8f95-8a16bd86f9ec"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.292447 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config" (OuterVolumeSpecName: "config") pod "86214154-257c-46e0-8f95-8a16bd86f9ec" (UID: "86214154-257c-46e0-8f95-8a16bd86f9ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.294834 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86214154-257c-46e0-8f95-8a16bd86f9ec" (UID: "86214154-257c-46e0-8f95-8a16bd86f9ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.294843 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r" (OuterVolumeSpecName: "kube-api-access-5jc2r") pod "86214154-257c-46e0-8f95-8a16bd86f9ec" (UID: "86214154-257c-46e0-8f95-8a16bd86f9ec"). InnerVolumeSpecName "kube-api-access-5jc2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.392915 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.392956 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86214154-257c-46e0-8f95-8a16bd86f9ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.392969 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jc2r\" (UniqueName: \"kubernetes.io/projected/86214154-257c-46e0-8f95-8a16bd86f9ec-kube-api-access-5jc2r\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.392981 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86214154-257c-46e0-8f95-8a16bd86f9ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.496043 4792 generic.go:334] "Generic (PLEG): container finished" podID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerID="b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd" exitCode=0 Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.496114 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" event={"ID":"86214154-257c-46e0-8f95-8a16bd86f9ec","Type":"ContainerDied","Data":"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd"} Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.496186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" event={"ID":"86214154-257c-46e0-8f95-8a16bd86f9ec","Type":"ContainerDied","Data":"5c7181453180429c40b6b468d9d7a719ce6fbd3cd941593af41254c66a887a0b"} Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.496181 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.496218 4792 scope.go:117] "RemoveContainer" containerID="b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.500980 4792 generic.go:334] "Generic (PLEG): container finished" podID="74c00cd5-2613-4930-9091-9061ea9496bf" containerID="bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613" exitCode=0 Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.501080 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.501076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" event={"ID":"74c00cd5-2613-4930-9091-9061ea9496bf","Type":"ContainerDied","Data":"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613"} Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.501275 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nwvtk" event={"ID":"74c00cd5-2613-4930-9091-9061ea9496bf","Type":"ContainerDied","Data":"2da85a859e3e94895d90d7b5acd75291707c11b0893e35024b78dba4c827835a"} Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.528269 4792 scope.go:117] "RemoveContainer" containerID="b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd" Feb 16 21:43:05 crc kubenswrapper[4792]: E0216 21:43:05.532127 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd\": container with ID starting with b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd not found: ID does not exist" containerID="b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.532289 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd"} err="failed to get container status \"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd\": rpc error: code = NotFound desc = could not find container \"b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd\": container with ID starting with b88fd0a02d7d4d254650081f43c4404215fe465186e8f37b1d3189df49e129cd not found: ID does not exist" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.532355 4792 scope.go:117] "RemoveContainer" containerID="bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.538995 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.546163 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r7nkn"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.552083 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.557684 4792 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nwvtk"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.559467 4792 scope.go:117] "RemoveContainer" containerID="bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613" Feb 16 21:43:05 crc kubenswrapper[4792]: E0216 21:43:05.560182 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613\": container with ID starting with bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613 not found: ID does not exist" containerID="bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.560216 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613"} err="failed to get container status \"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613\": rpc error: code = NotFound desc = could not find container \"bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613\": container with ID starting with bc18e0bba9d5fdfe6d465007914b1bcece96fa8e5cbc7e690142ebaead446613 not found: ID does not exist" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.904012 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:05 crc kubenswrapper[4792]: E0216 21:43:05.904440 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerName="route-controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.904471 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerName="route-controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: E0216 21:43:05.904501 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" containerName="controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.904518 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" containerName="controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.904774 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" containerName="controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.904806 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" containerName="route-controller-manager" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.905656 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.907515 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.908555 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.908759 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.909678 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.909818 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.909947 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.910257 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.910937 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.914119 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.914447 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.915226 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.915465 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.915474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.916084 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.926346 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.928196 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.944769 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998643 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6q22\" (UniqueName: \"kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998680 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998723 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5k4\" (UniqueName: \"kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:05 crc kubenswrapper[4792]: I0216 21:43:05.998770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.033880 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c00cd5-2613-4930-9091-9061ea9496bf" path="/var/lib/kubelet/pods/74c00cd5-2613-4930-9091-9061ea9496bf/volumes" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.034964 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86214154-257c-46e0-8f95-8a16bd86f9ec" path="/var/lib/kubelet/pods/86214154-257c-46e0-8f95-8a16bd86f9ec/volumes" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.099929 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.100057 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6q22\" (UniqueName: \"kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.100365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm5k4\" (UniqueName: \"kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101490 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.101726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.103381 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.103564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.103985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.105253 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.108242 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " 
pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.108312 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.134471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6q22\" (UniqueName: \"kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22\") pod \"controller-manager-69c99cdf5d-ggqjg\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.134941 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm5k4\" (UniqueName: \"kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4\") pod \"route-controller-manager-6f6f948fbb-vbpbb\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.231489 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.244098 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.470612 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:06 crc kubenswrapper[4792]: W0216 21:43:06.482134 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9894d628_71ff_4965_b40f_c553a0111b9e.slice/crio-943467788f16ce33e1e69425f5c3ab4365c5c4862f303e58d284b4185229598e WatchSource:0}: Error finding container 943467788f16ce33e1e69425f5c3ab4365c5c4862f303e58d284b4185229598e: Status 404 returned error can't find the container with id 943467788f16ce33e1e69425f5c3ab4365c5c4862f303e58d284b4185229598e Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.512099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" event={"ID":"9894d628-71ff-4965-b40f-c553a0111b9e","Type":"ContainerStarted","Data":"943467788f16ce33e1e69425f5c3ab4365c5c4862f303e58d284b4185229598e"} Feb 16 21:43:06 crc kubenswrapper[4792]: I0216 21:43:06.522531 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:06 crc kubenswrapper[4792]: W0216 21:43:06.544828 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabbab17d_cf5f_400f_92d5_73d5d24365a0.slice/crio-70aea9e3b8c0653cdeb6dcb29971e5a299f431fa7c0492f74b1cbd226dfe7e39 WatchSource:0}: Error finding container 70aea9e3b8c0653cdeb6dcb29971e5a299f431fa7c0492f74b1cbd226dfe7e39: Status 404 returned error can't find the container with id 
70aea9e3b8c0653cdeb6dcb29971e5a299f431fa7c0492f74b1cbd226dfe7e39 Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.522409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" event={"ID":"abbab17d-cf5f-400f-92d5-73d5d24365a0","Type":"ContainerStarted","Data":"c29bdd6cd9a559c8e0619605f67d2844e956027f1d764803af8e0dcd81982025"} Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.522497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" event={"ID":"abbab17d-cf5f-400f-92d5-73d5d24365a0","Type":"ContainerStarted","Data":"70aea9e3b8c0653cdeb6dcb29971e5a299f431fa7c0492f74b1cbd226dfe7e39"} Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.522547 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.524716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" event={"ID":"9894d628-71ff-4965-b40f-c553a0111b9e","Type":"ContainerStarted","Data":"2f8545941a48bf593bb8dc8532aa4c9ced1dcaeeebdcadf4266082d54cef8249"} Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.524943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.532162 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.533343 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.546289 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" podStartSLOduration=3.5462505330000003 podStartE2EDuration="3.546250533s" podCreationTimestamp="2026-02-16 21:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:43:07.541370337 +0000 UTC m=+320.194649228" watchObservedRunningTime="2026-02-16 21:43:07.546250533 +0000 UTC m=+320.199529464" Feb 16 21:43:07 crc kubenswrapper[4792]: I0216 21:43:07.581709 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" podStartSLOduration=3.58166732 podStartE2EDuration="3.58166732s" podCreationTimestamp="2026-02-16 21:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:43:07.564426919 +0000 UTC m=+320.217705810" watchObservedRunningTime="2026-02-16 21:43:07.58166732 +0000 UTC m=+320.234946251" Feb 16 21:43:18 crc kubenswrapper[4792]: I0216 21:43:18.563320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:18 crc kubenswrapper[4792]: E0216 21:43:18.563497 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:43:18 crc kubenswrapper[4792]: E0216 21:43:18.563949 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates podName:28305a45-7e34-4e32-9579-c50ea1d1d4e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:43:50.563930574 +0000 UTC m=+363.217209475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-gjrv9" (UID: "28305a45-7e34-4e32-9579-c50ea1d1d4e5") : secret "prometheus-operator-admission-webhook-tls" not found Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.169698 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.170253 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" podUID="9894d628-71ff-4965-b40f-c553a0111b9e" containerName="controller-manager" containerID="cri-o://2f8545941a48bf593bb8dc8532aa4c9ced1dcaeeebdcadf4266082d54cef8249" gracePeriod=30 Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.192446 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.192808 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" podUID="abbab17d-cf5f-400f-92d5-73d5d24365a0" containerName="route-controller-manager" containerID="cri-o://c29bdd6cd9a559c8e0619605f67d2844e956027f1d764803af8e0dcd81982025" gracePeriod=30 Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.605888 4792 generic.go:334] "Generic (PLEG): container finished" podID="abbab17d-cf5f-400f-92d5-73d5d24365a0" containerID="c29bdd6cd9a559c8e0619605f67d2844e956027f1d764803af8e0dcd81982025" exitCode=0 Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.605969 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" event={"ID":"abbab17d-cf5f-400f-92d5-73d5d24365a0","Type":"ContainerDied","Data":"c29bdd6cd9a559c8e0619605f67d2844e956027f1d764803af8e0dcd81982025"} Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.608532 4792 generic.go:334] "Generic (PLEG): container finished" podID="9894d628-71ff-4965-b40f-c553a0111b9e" containerID="2f8545941a48bf593bb8dc8532aa4c9ced1dcaeeebdcadf4266082d54cef8249" exitCode=0 Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.608589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" event={"ID":"9894d628-71ff-4965-b40f-c553a0111b9e","Type":"ContainerDied","Data":"2f8545941a48bf593bb8dc8532aa4c9ced1dcaeeebdcadf4266082d54cef8249"} Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.737097 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.745782 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919264 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config\") pod \"9894d628-71ff-4965-b40f-c553a0111b9e\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert\") pod \"9894d628-71ff-4965-b40f-c553a0111b9e\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919358 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles\") pod \"9894d628-71ff-4965-b40f-c553a0111b9e\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919394 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config\") pod \"abbab17d-cf5f-400f-92d5-73d5d24365a0\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919432 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca\") pod \"abbab17d-cf5f-400f-92d5-73d5d24365a0\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919451 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm5k4\" (UniqueName: \"kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4\") pod \"abbab17d-cf5f-400f-92d5-73d5d24365a0\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919467 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca\") pod \"9894d628-71ff-4965-b40f-c553a0111b9e\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919521 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert\") pod \"abbab17d-cf5f-400f-92d5-73d5d24365a0\" (UID: \"abbab17d-cf5f-400f-92d5-73d5d24365a0\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.919536 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6q22\" (UniqueName: \"kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22\") pod \"9894d628-71ff-4965-b40f-c553a0111b9e\" (UID: \"9894d628-71ff-4965-b40f-c553a0111b9e\") " Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.920677 4792 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config" (OuterVolumeSpecName: "config") pod "abbab17d-cf5f-400f-92d5-73d5d24365a0" (UID: "abbab17d-cf5f-400f-92d5-73d5d24365a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.920722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "abbab17d-cf5f-400f-92d5-73d5d24365a0" (UID: "abbab17d-cf5f-400f-92d5-73d5d24365a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.921074 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config" (OuterVolumeSpecName: "config") pod "9894d628-71ff-4965-b40f-c553a0111b9e" (UID: "9894d628-71ff-4965-b40f-c553a0111b9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.921120 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9894d628-71ff-4965-b40f-c553a0111b9e" (UID: "9894d628-71ff-4965-b40f-c553a0111b9e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.921431 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9894d628-71ff-4965-b40f-c553a0111b9e" (UID: "9894d628-71ff-4965-b40f-c553a0111b9e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.925355 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abbab17d-cf5f-400f-92d5-73d5d24365a0" (UID: "abbab17d-cf5f-400f-92d5-73d5d24365a0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.926333 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22" (OuterVolumeSpecName: "kube-api-access-s6q22") pod "9894d628-71ff-4965-b40f-c553a0111b9e" (UID: "9894d628-71ff-4965-b40f-c553a0111b9e"). InnerVolumeSpecName "kube-api-access-s6q22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.927951 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4" (OuterVolumeSpecName: "kube-api-access-dm5k4") pod "abbab17d-cf5f-400f-92d5-73d5d24365a0" (UID: "abbab17d-cf5f-400f-92d5-73d5d24365a0"). InnerVolumeSpecName "kube-api-access-dm5k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:43:21 crc kubenswrapper[4792]: I0216 21:43:21.928092 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9894d628-71ff-4965-b40f-c553a0111b9e" (UID: "9894d628-71ff-4965-b40f-c553a0111b9e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020869 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbab17d-cf5f-400f-92d5-73d5d24365a0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020902 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6q22\" (UniqueName: \"kubernetes.io/projected/9894d628-71ff-4965-b40f-c553a0111b9e-kube-api-access-s6q22\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020914 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020922 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9894d628-71ff-4965-b40f-c553a0111b9e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020930 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020938 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020948 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbab17d-cf5f-400f-92d5-73d5d24365a0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020957 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm5k4\" (UniqueName: \"kubernetes.io/projected/abbab17d-cf5f-400f-92d5-73d5d24365a0-kube-api-access-dm5k4\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.020965 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9894d628-71ff-4965-b40f-c553a0111b9e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.614589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" event={"ID":"9894d628-71ff-4965-b40f-c553a0111b9e","Type":"ContainerDied","Data":"943467788f16ce33e1e69425f5c3ab4365c5c4862f303e58d284b4185229598e"} Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.614663 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.614681 4792 scope.go:117] "RemoveContainer" containerID="2f8545941a48bf593bb8dc8532aa4c9ced1dcaeeebdcadf4266082d54cef8249" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.616966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" event={"ID":"abbab17d-cf5f-400f-92d5-73d5d24365a0","Type":"ContainerDied","Data":"70aea9e3b8c0653cdeb6dcb29971e5a299f431fa7c0492f74b1cbd226dfe7e39"} Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.617004 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.637388 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.640773 4792 scope.go:117] "RemoveContainer" containerID="c29bdd6cd9a559c8e0619605f67d2844e956027f1d764803af8e0dcd81982025" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.645617 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6f948fbb-vbpbb"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.653372 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.657107 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69c99cdf5d-ggqjg"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.918673 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"] Feb 16 21:43:22 crc kubenswrapper[4792]: E0216 21:43:22.919052 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbab17d-cf5f-400f-92d5-73d5d24365a0" containerName="route-controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.919082 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbab17d-cf5f-400f-92d5-73d5d24365a0" containerName="route-controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: E0216 21:43:22.919112 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9894d628-71ff-4965-b40f-c553a0111b9e" containerName="controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.919126 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9894d628-71ff-4965-b40f-c553a0111b9e" containerName="controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.919299 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbab17d-cf5f-400f-92d5-73d5d24365a0" containerName="route-controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.919332 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9894d628-71ff-4965-b40f-c553a0111b9e" containerName="controller-manager" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.919959 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.922848 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.923280 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.924387 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.924773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.925891 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.926296 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.935122 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.935513 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.936749 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.940402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.940676 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.940845 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.943052 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.943281 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.944816 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.948101 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"] Feb 16 21:43:22 crc kubenswrapper[4792]: I0216 21:43:22.961175 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"] Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.036991 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.037059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.037098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8dbx\" (UniqueName: \"kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.037125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.037428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8dbx\" (UniqueName: \"kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138235 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles\") 
pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138320 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kql5q\" (UniqueName: \"kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.138358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.139354 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.139568 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.139829 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " 
pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.148775 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.157019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8dbx\" (UniqueName: \"kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx\") pod \"controller-manager-7c788c996c-c6gq4\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") " pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.239517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.239571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.239592 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kql5q\" (UniqueName: \"kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.239698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.240443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.240779 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.244105 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.250275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.256383 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kql5q\" (UniqueName: \"kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q\") pod \"route-controller-manager-7b4bc89c55-kzjtw\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.264704 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.518455 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"] Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.622815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" event={"ID":"79fa2e58-3e1c-4021-bc1b-93c20da8b080","Type":"ContainerStarted","Data":"00c085f5bf39494546f0283a98402576a9200366af0189c76fcfe977d6cd7dce"} Feb 16 21:43:23 crc kubenswrapper[4792]: I0216 21:43:23.655837 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"] Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.037323 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9894d628-71ff-4965-b40f-c553a0111b9e" path="/var/lib/kubelet/pods/9894d628-71ff-4965-b40f-c553a0111b9e/volumes" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.038666 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbab17d-cf5f-400f-92d5-73d5d24365a0" path="/var/lib/kubelet/pods/abbab17d-cf5f-400f-92d5-73d5d24365a0/volumes" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.633477 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" event={"ID":"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b","Type":"ContainerStarted","Data":"968761cd3bae3200d2d76c7046040df941b15f48489433b0ec16cc9e0ee06af3"} Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.634151 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.634175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" event={"ID":"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b","Type":"ContainerStarted","Data":"d2f36b57e13fe2d0c6fecd2b591a543263b23db1fce1a1c9caf69f99637f86af"} Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.635479 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" event={"ID":"79fa2e58-3e1c-4021-bc1b-93c20da8b080","Type":"ContainerStarted","Data":"f2eef1f330949ce5139fc1100c1bc55a0e0b869bae417dd164f166aabe6c3d7b"} Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.635843 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.640146 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.643081 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:43:24 crc kubenswrapper[4792]: I0216 21:43:24.657831 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" podStartSLOduration=3.657803377 podStartE2EDuration="3.657803377s" podCreationTimestamp="2026-02-16 21:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:43:24.654449833 +0000 UTC m=+337.307728764" watchObservedRunningTime="2026-02-16 21:43:24.657803377 +0000 UTC m=+337.311082288" Feb 16 21:43:31 crc kubenswrapper[4792]: I0216 21:43:31.532840 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:43:31 crc kubenswrapper[4792]: I0216 21:43:31.533162 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.394578 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" podStartSLOduration=24.394561479 podStartE2EDuration="24.394561479s" podCreationTimestamp="2026-02-16 21:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:43:24.694166578 +0000 UTC m=+337.347445479" watchObservedRunningTime="2026-02-16 21:43:45.394561479 +0000 UTC m=+358.047840370" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.398106 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-twrqg"] Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.398761 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.410755 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-twrqg"] Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-bound-sa-token\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534109 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b29a3918-f008-4a27-9796-1f2d4bce67b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-certificates\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddm6\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-kube-api-access-kddm6\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534258 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-tls\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534308 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b29a3918-f008-4a27-9796-1f2d4bce67b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.534366 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-trusted-ca\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.561283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635228 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddm6\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-kube-api-access-kddm6\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-tls\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635345 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b29a3918-f008-4a27-9796-1f2d4bce67b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-trusted-ca\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635432 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-bound-sa-token\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635462 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b29a3918-f008-4a27-9796-1f2d4bce67b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.635482 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-certificates\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.636561 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-certificates\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.638236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b29a3918-f008-4a27-9796-1f2d4bce67b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.639464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b29a3918-f008-4a27-9796-1f2d4bce67b5-trusted-ca\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.644167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-registry-tls\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.644212 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b29a3918-f008-4a27-9796-1f2d4bce67b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.657008 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-bound-sa-token\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.661041 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddm6\" (UniqueName: \"kubernetes.io/projected/b29a3918-f008-4a27-9796-1f2d4bce67b5-kube-api-access-kddm6\") pod \"image-registry-66df7c8f76-twrqg\" (UID: \"b29a3918-f008-4a27-9796-1f2d4bce67b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:45 crc kubenswrapper[4792]: I0216 21:43:45.717272 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:46 crc kubenswrapper[4792]: I0216 21:43:46.392032 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-twrqg"] Feb 16 21:43:47 crc kubenswrapper[4792]: I0216 21:43:47.368862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" event={"ID":"b29a3918-f008-4a27-9796-1f2d4bce67b5","Type":"ContainerStarted","Data":"88f2c35342ffb2b08f6adf20dbe222909870f305022ca58221d9f11d5bb9043a"} Feb 16 21:43:47 crc kubenswrapper[4792]: I0216 21:43:47.369506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" event={"ID":"b29a3918-f008-4a27-9796-1f2d4bce67b5","Type":"ContainerStarted","Data":"6d6654abcaba9226b6f0196b31f400b6da6030facfcce0b1e50cf742947ea6cd"} Feb 16 21:43:47 crc kubenswrapper[4792]: I0216 21:43:47.369552 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" Feb 16 21:43:47 crc kubenswrapper[4792]: I0216 21:43:47.386584 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg" podStartSLOduration=2.386564377 podStartE2EDuration="2.386564377s" podCreationTimestamp="2026-02-16 21:43:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:43:47.384265643 +0000 UTC m=+360.037544534" watchObservedRunningTime="2026-02-16 21:43:47.386564377 +0000 UTC m=+360.039843268" Feb 16 21:43:50 crc kubenswrapper[4792]: I0216 21:43:50.596982 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:50 crc kubenswrapper[4792]: I0216 21:43:50.605042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/28305a45-7e34-4e32-9579-c50ea1d1d4e5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-gjrv9\" (UID: \"28305a45-7e34-4e32-9579-c50ea1d1d4e5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:50 crc kubenswrapper[4792]: I0216 21:43:50.759077 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:51 crc kubenswrapper[4792]: I0216 21:43:51.233221 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9"] Feb 16 21:43:51 crc kubenswrapper[4792]: W0216 21:43:51.237142 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28305a45_7e34_4e32_9579_c50ea1d1d4e5.slice/crio-4b7ebce80b67ecc4f2dcd0f7876d924a4e3e38791904063000261ea1a2134b28 WatchSource:0}: Error finding container 4b7ebce80b67ecc4f2dcd0f7876d924a4e3e38791904063000261ea1a2134b28: Status 404 returned error can't find the container with id 4b7ebce80b67ecc4f2dcd0f7876d924a4e3e38791904063000261ea1a2134b28 Feb 16 21:43:51 crc kubenswrapper[4792]: I0216 21:43:51.389680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" event={"ID":"28305a45-7e34-4e32-9579-c50ea1d1d4e5","Type":"ContainerStarted","Data":"4b7ebce80b67ecc4f2dcd0f7876d924a4e3e38791904063000261ea1a2134b28"} Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.400912 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" event={"ID":"28305a45-7e34-4e32-9579-c50ea1d1d4e5","Type":"ContainerStarted","Data":"6fce049b445c9cfddb2aab639e3591ef1cf302eda12162a2957f00ef3bc52b13"} Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.401390 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.407997 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.422831 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-gjrv9" podStartSLOduration=66.120985027 podStartE2EDuration="1m7.422814306s" podCreationTimestamp="2026-02-16 21:42:46 +0000 UTC" firstStartedPulling="2026-02-16 21:43:51.239691379 +0000 UTC m=+363.892970280" lastFinishedPulling="2026-02-16 21:43:52.541520668 +0000 UTC m=+365.194799559" observedRunningTime="2026-02-16 21:43:53.419656288 +0000 UTC m=+366.072935189" watchObservedRunningTime="2026-02-16 21:43:53.422814306 +0000 UTC m=+366.076093197" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.921775 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-5xvvr"] Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.922960 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.926582 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.926801 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.927165 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.934050 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-zc5gz" Feb 16 21:43:53 crc kubenswrapper[4792]: I0216 21:43:53.942722 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-5xvvr"] Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.057230 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.057285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntldm\" (UniqueName: \"kubernetes.io/projected/3553f8cd-db67-41d5-bf32-ddd6467f45fa-kube-api-access-ntldm\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.057354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.057453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3553f8cd-db67-41d5-bf32-ddd6467f45fa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.158585 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3553f8cd-db67-41d5-bf32-ddd6467f45fa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.159724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3553f8cd-db67-41d5-bf32-ddd6467f45fa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-5xvvr\" 
(UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.159887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.160180 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntldm\" (UniqueName: \"kubernetes.io/projected/3553f8cd-db67-41d5-bf32-ddd6467f45fa-kube-api-access-ntldm\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.160520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.169116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.169883 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3553f8cd-db67-41d5-bf32-ddd6467f45fa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.185530 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntldm\" (UniqueName: \"kubernetes.io/projected/3553f8cd-db67-41d5-bf32-ddd6467f45fa-kube-api-access-ntldm\") pod \"prometheus-operator-db54df47d-5xvvr\" (UID: \"3553f8cd-db67-41d5-bf32-ddd6467f45fa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.247374 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" Feb 16 21:43:54 crc kubenswrapper[4792]: I0216 21:43:54.669560 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-5xvvr"] Feb 16 21:43:54 crc kubenswrapper[4792]: W0216 21:43:54.673755 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3553f8cd_db67_41d5_bf32_ddd6467f45fa.slice/crio-3c5978ec76c970ae62e5a19dc5ff52493fae4641922369fe429b08825d4511cf WatchSource:0}: Error finding container 3c5978ec76c970ae62e5a19dc5ff52493fae4641922369fe429b08825d4511cf: Status 404 returned error can't find the container with id 3c5978ec76c970ae62e5a19dc5ff52493fae4641922369fe429b08825d4511cf Feb 16 21:43:55 crc kubenswrapper[4792]: I0216 21:43:55.414774 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" event={"ID":"3553f8cd-db67-41d5-bf32-ddd6467f45fa","Type":"ContainerStarted","Data":"3c5978ec76c970ae62e5a19dc5ff52493fae4641922369fe429b08825d4511cf"} Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.402784 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmzts"] Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.404974 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.406911 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.418733 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmzts"] Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.431318 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" event={"ID":"3553f8cd-db67-41d5-bf32-ddd6467f45fa","Type":"ContainerStarted","Data":"4a6a4784cc8556b88211dd215a352fe357530613a66766956b488cfcef8ffcd8"} Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.431375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" event={"ID":"3553f8cd-db67-41d5-bf32-ddd6467f45fa","Type":"ContainerStarted","Data":"87d7a389ce78230fc586f7c98a0a70ce160f6bcd598ab39535e48d43d0e6dbb9"} Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.455675 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-5xvvr" podStartSLOduration=2.685547094 podStartE2EDuration="4.455651596s" podCreationTimestamp="2026-02-16 21:43:53 +0000 UTC" firstStartedPulling="2026-02-16 21:43:54.676557308 +0000 UTC m=+367.329836209" lastFinishedPulling="2026-02-16 21:43:56.44666182 +0000 UTC m=+369.099940711" observedRunningTime="2026-02-16 21:43:57.451358446 +0000 UTC m=+370.104637337" watchObservedRunningTime="2026-02-16 21:43:57.455651596 +0000 UTC m=+370.108930487" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.502921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95l8q\" (UniqueName: \"kubernetes.io/projected/7cb484ab-fa97-4c10-a78e-20a51ec6618b-kube-api-access-95l8q\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " 
pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.503108 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-utilities\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.503194 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-catalog-content\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.596537 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9xfg"] Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.597479 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.600399 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.605688 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95l8q\" (UniqueName: \"kubernetes.io/projected/7cb484ab-fa97-4c10-a78e-20a51ec6618b-kube-api-access-95l8q\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.605751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-utilities\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.605801 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-catalog-content\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.606331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-utilities\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.606345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb484ab-fa97-4c10-a78e-20a51ec6618b-catalog-content\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.607473 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9xfg"] Feb 16 
21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.639368 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95l8q\" (UniqueName: \"kubernetes.io/projected/7cb484ab-fa97-4c10-a78e-20a51ec6618b-kube-api-access-95l8q\") pod \"certified-operators-fmzts\" (UID: \"7cb484ab-fa97-4c10-a78e-20a51ec6618b\") " pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.707172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrz4\" (UniqueName: \"kubernetes.io/projected/da72596c-78d5-40d7-99b1-282bb5bdeb6e-kube-api-access-wrrz4\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.707207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-utilities\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.707228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-catalog-content\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.725506 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmzts" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.808343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrz4\" (UniqueName: \"kubernetes.io/projected/da72596c-78d5-40d7-99b1-282bb5bdeb6e-kube-api-access-wrrz4\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.808383 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-utilities\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.808408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-catalog-content\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.808821 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-catalog-content\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.809270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/da72596c-78d5-40d7-99b1-282bb5bdeb6e-utilities\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.830203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrz4\" (UniqueName: \"kubernetes.io/projected/da72596c-78d5-40d7-99b1-282bb5bdeb6e-kube-api-access-wrrz4\") pod \"redhat-operators-g9xfg\" (UID: \"da72596c-78d5-40d7-99b1-282bb5bdeb6e\") " pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:57 crc kubenswrapper[4792]: I0216 21:43:57.915542 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9xfg" Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.145495 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmzts"] Feb 16 21:43:58 crc kubenswrapper[4792]: W0216 21:43:58.148648 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb484ab_fa97_4c10_a78e_20a51ec6618b.slice/crio-bd7a1adb2903d96515dfcfc126f430c3321be66b6ab8679c504e1b25447cc2ed WatchSource:0}: Error finding container bd7a1adb2903d96515dfcfc126f430c3321be66b6ab8679c504e1b25447cc2ed: Status 404 returned error can't find the container with id bd7a1adb2903d96515dfcfc126f430c3321be66b6ab8679c504e1b25447cc2ed Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.300249 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9xfg"] Feb 16 21:43:58 crc kubenswrapper[4792]: W0216 21:43:58.371577 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda72596c_78d5_40d7_99b1_282bb5bdeb6e.slice/crio-7710d775b7c956a53951945686bbebab8997c6aa9d59979c4387b2f77d3aaef6 WatchSource:0}: Error finding container 7710d775b7c956a53951945686bbebab8997c6aa9d59979c4387b2f77d3aaef6: Status 404 returned error can't find the container with id 7710d775b7c956a53951945686bbebab8997c6aa9d59979c4387b2f77d3aaef6 Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.437488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9xfg" event={"ID":"da72596c-78d5-40d7-99b1-282bb5bdeb6e","Type":"ContainerStarted","Data":"7710d775b7c956a53951945686bbebab8997c6aa9d59979c4387b2f77d3aaef6"} Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.438738 4792 generic.go:334] "Generic (PLEG): container finished" podID="7cb484ab-fa97-4c10-a78e-20a51ec6618b" containerID="cde44b3af1b112d8bf7d588387cd3325202b7783b6357cb16a15b24522976cf1" exitCode=0 Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.438824 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmzts" event={"ID":"7cb484ab-fa97-4c10-a78e-20a51ec6618b","Type":"ContainerDied","Data":"cde44b3af1b112d8bf7d588387cd3325202b7783b6357cb16a15b24522976cf1"} Feb 16 21:43:58 crc kubenswrapper[4792]: I0216 21:43:58.438859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmzts" event={"ID":"7cb484ab-fa97-4c10-a78e-20a51ec6618b","Type":"ContainerStarted","Data":"bd7a1adb2903d96515dfcfc126f430c3321be66b6ab8679c504e1b25447cc2ed"} Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.192584 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/node-exporter-9kjkj"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.193729 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.196994 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.197098 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4q8jk" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.197285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.224477 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4q8b7"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.225530 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.229335 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.231943 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4q8b7"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.237270 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.238563 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.241806 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.245430 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vcmqs" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.245626 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.248740 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.278038 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.279413 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.281780 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-92nkg" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.281924 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.282184 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.291710 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.303092 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6"] Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-tls\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-root\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325292 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ec5b344-1e65-4c9a-895c-f08dd626d231-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325312 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-catalog-content\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-metrics-client-ca\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325381 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-sys\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325403 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-utilities\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325450 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325466 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-textfile\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kh54\" (UniqueName: \"kubernetes.io/projected/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-kube-api-access-9kh54\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-wtmp\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2r5j\" (UniqueName: \"kubernetes.io/projected/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-kube-api-access-w2r5j\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.325558 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cb2\" (UniqueName: \"kubernetes.io/projected/4ec5b344-1e65-4c9a-895c-f08dd626d231-kube-api-access-h2cb2\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2r5j\" (UniqueName: \"kubernetes.io/projected/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-kube-api-access-w2r5j\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427158 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cb2\" (UniqueName: \"kubernetes.io/projected/4ec5b344-1e65-4c9a-895c-f08dd626d231-kube-api-access-h2cb2\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-tls\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-root\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96lkd\" (UniqueName: \"kubernetes.io/projected/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-api-access-96lkd\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427294 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ec5b344-1e65-4c9a-895c-f08dd626d231-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-catalog-content\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427341 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-metrics-client-ca\") pod 
\"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-metrics-client-ca\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-sys\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-utilities\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427451 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427492 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427619 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-textfile\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427647 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427668 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kh54\" (UniqueName: \"kubernetes.io/projected/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-kube-api-access-9kh54\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-wtmp\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.427939 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-wtmp\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.429418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-root\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.429923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-utilities\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.430290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-catalog-content\") pod 
\"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.430482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-sys\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: E0216 21:43:59.430726 4792 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 16 21:43:59 crc kubenswrapper[4792]: E0216 21:43:59.430806 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls podName:4ec5b344-1e65-4c9a-895c-f08dd626d231 nodeName:}" failed. No retries permitted until 2026-02-16 21:43:59.930766887 +0000 UTC m=+372.584045778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-hqfqw" (UID: "4ec5b344-1e65-4c9a-895c-f08dd626d231") : secret "openshift-state-metrics-tls" not found Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.430875 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-textfile\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.430922 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-metrics-client-ca\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.432334 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ec5b344-1e65-4c9a-895c-f08dd626d231-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.434912 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.434962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.435524 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-node-exporter-tls\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.446468 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cb2\" (UniqueName: \"kubernetes.io/projected/4ec5b344-1e65-4c9a-895c-f08dd626d231-kube-api-access-h2cb2\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.447101 4792 generic.go:334] "Generic (PLEG): container finished" podID="da72596c-78d5-40d7-99b1-282bb5bdeb6e" containerID="40fb27b54baa3dc2efd9c2d7aceadd0c07498cc3a6e7e9a4807df3f979852dbb" exitCode=0 Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.447161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9xfg" event={"ID":"da72596c-78d5-40d7-99b1-282bb5bdeb6e","Type":"ContainerDied","Data":"40fb27b54baa3dc2efd9c2d7aceadd0c07498cc3a6e7e9a4807df3f979852dbb"} Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.449951 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2r5j\" (UniqueName: \"kubernetes.io/projected/ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1-kube-api-access-w2r5j\") pod \"community-operators-4q8b7\" (UID: \"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1\") " pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.455434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kh54\" (UniqueName: \"kubernetes.io/projected/9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c-kube-api-access-9kh54\") pod \"node-exporter-9kjkj\" (UID: \"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c\") " pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.524590 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-9kjkj" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.529353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.529409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.529431 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.529500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.530079 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.530175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96lkd\" (UniqueName: \"kubernetes.io/projected/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-api-access-96lkd\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.530225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.532217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.533014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.533159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.533401 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.546464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96lkd\" (UniqueName: \"kubernetes.io/projected/6923a9c3-34fb-43fb-a93b-19bef32e0b6f-kube-api-access-96lkd\") pod \"kube-state-metrics-777cb5bd5d-p6nt6\" (UID: \"6923a9c3-34fb-43fb-a93b-19bef32e0b6f\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.546865 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4q8b7" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.603730 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.937039 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.943485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ec5b344-1e65-4c9a-895c-f08dd626d231-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hqfqw\" (UID: \"4ec5b344-1e65-4c9a-895c-f08dd626d231\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:43:59 crc kubenswrapper[4792]: I0216 21:43:59.974556 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4q8b7"] Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.063244 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6"] Feb 16 21:44:00 crc kubenswrapper[4792]: W0216 21:44:00.070645 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6923a9c3_34fb_43fb_a93b_19bef32e0b6f.slice/crio-2b9e84e72d5a6c38aacd18f99cb5f981a6bd58e6b69b7db8f694010fa1f573c3 WatchSource:0}: Error finding container 2b9e84e72d5a6c38aacd18f99cb5f981a6bd58e6b69b7db8f694010fa1f573c3: Status 404 returned error can't find the container with id 2b9e84e72d5a6c38aacd18f99cb5f981a6bd58e6b69b7db8f694010fa1f573c3 Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.164655 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.200341 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pblwf"] Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.201579 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.203423 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.226705 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblwf"] Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.343466 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-catalog-content\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.343536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-utilities\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.343637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdc5w\" (UniqueName: \"kubernetes.io/projected/d7baac81-f46f-4e76-9333-95dcdc915c42-kube-api-access-sdc5w\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.362365 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.365385 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.367963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.368256 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.368366 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.368477 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.368634 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.368739 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.369288 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2lpqj" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.369305 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.374503 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.414099 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.447930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-catalog-content\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448004 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-out\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448045 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-utilities\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448066 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448099 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdc5w\" (UniqueName: \"kubernetes.io/projected/d7baac81-f46f-4e76-9333-95dcdc915c42-kube-api-access-sdc5w\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864cg\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-kube-api-access-864cg\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448253 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-web-config\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.448288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.449116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-catalog-content\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.449555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7baac81-f46f-4e76-9333-95dcdc915c42-utilities\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.462244 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" event={"ID":"6923a9c3-34fb-43fb-a93b-19bef32e0b6f","Type":"ContainerStarted","Data":"2b9e84e72d5a6c38aacd18f99cb5f981a6bd58e6b69b7db8f694010fa1f573c3"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.466047 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9xfg" event={"ID":"da72596c-78d5-40d7-99b1-282bb5bdeb6e","Type":"ContainerStarted","Data":"ec466004b6f61bf4974f04e115d448fef7fa80f19892d6621dbf1d32aeb0faad"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.468627 4792 generic.go:334] "Generic (PLEG): container finished" podID="7cb484ab-fa97-4c10-a78e-20a51ec6618b" containerID="7ff8635be9ec8ade92d8d5f17b514904d3a64e70c478fb96e5f0588084baf87c" exitCode=0 Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.468686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmzts" event={"ID":"7cb484ab-fa97-4c10-a78e-20a51ec6618b","Type":"ContainerDied","Data":"7ff8635be9ec8ade92d8d5f17b514904d3a64e70c478fb96e5f0588084baf87c"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.473062 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1" containerID="fb89e513d1e52a6c9ab95d4a9d6fccf30255e3eacfb216ae0167a46903cdde49" exitCode=0 Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.473068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdc5w\" (UniqueName: 
\"kubernetes.io/projected/d7baac81-f46f-4e76-9333-95dcdc915c42-kube-api-access-sdc5w\") pod \"redhat-marketplace-pblwf\" (UID: \"d7baac81-f46f-4e76-9333-95dcdc915c42\") " pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.473885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4q8b7" event={"ID":"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1","Type":"ContainerDied","Data":"fb89e513d1e52a6c9ab95d4a9d6fccf30255e3eacfb216ae0167a46903cdde49"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.473916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4q8b7" event={"ID":"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1","Type":"ContainerStarted","Data":"43ff32da420142256e3767c7122ab23d5161e39d294abd08bcfc4697a770491a"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.477965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9kjkj" event={"ID":"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c","Type":"ContainerStarted","Data":"e315009725652931eca56212d52484dafbee2d329321b624b1ded3369cc93044"} Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.536752 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblwf" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549541 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549614 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-web-config\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549660 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-out\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549826 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549857 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549893 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549950 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.549977 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864cg\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-kube-api-access-864cg\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.552733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.553416 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.554211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-out\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.554350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.555755 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.556208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-tls-assets\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.556738 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.557899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-config-volume\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.558862 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.561230 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-web-config\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.570236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864cg\" (UniqueName: \"kubernetes.io/projected/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-kube-api-access-864cg\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.571916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bd2c43d0-5333-4f78-96d3-9ed86ecfd602-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"bd2c43d0-5333-4f78-96d3-9ed86ecfd602\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.690813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.709683 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw"]
Feb 16 21:44:00 crc kubenswrapper[4792]: I0216 21:44:00.968015 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblwf"]
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.175391 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 21:44:01 crc kubenswrapper[4792]: W0216 21:44:01.195162 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2c43d0_5333_4f78_96d3_9ed86ecfd602.slice/crio-82e96bc3d78eeac05fdf7ba3484254b0d4fcb3b74d42cc2b3db78c88430ebb25 WatchSource:0}: Error finding container 82e96bc3d78eeac05fdf7ba3484254b0d4fcb3b74d42cc2b3db78c88430ebb25: Status 404 returned error can't find the container with id 82e96bc3d78eeac05fdf7ba3484254b0d4fcb3b74d42cc2b3db78c88430ebb25
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.347492 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-77f559c558-dggk9"]
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.349531 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.351782 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.352539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.352761 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.352795 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-434jvne9fv79v"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.352951 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.352960 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-bktxj"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.353112 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.361273 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-77f559c558-dggk9"]
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-grpc-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq6t\" (UniqueName: \"kubernetes.io/projected/90fa52da-61b8-4afc-9e6f-52112bb14dea-kube-api-access-bdq6t\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462695 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462758 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.462787 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90fa52da-61b8-4afc-9e6f-52112bb14dea-metrics-client-ca\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.485141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"82e96bc3d78eeac05fdf7ba3484254b0d4fcb3b74d42cc2b3db78c88430ebb25"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.486816 4792 generic.go:334] "Generic (PLEG): container finished" podID="da72596c-78d5-40d7-99b1-282bb5bdeb6e" containerID="ec466004b6f61bf4974f04e115d448fef7fa80f19892d6621dbf1d32aeb0faad" exitCode=0
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.486857 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9xfg" event={"ID":"da72596c-78d5-40d7-99b1-282bb5bdeb6e","Type":"ContainerDied","Data":"ec466004b6f61bf4974f04e115d448fef7fa80f19892d6621dbf1d32aeb0faad"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.489122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" event={"ID":"4ec5b344-1e65-4c9a-895c-f08dd626d231","Type":"ContainerStarted","Data":"fc9ebd8fcdfb7945039055995f81df99f914bae4f8d122a7c6abf2829c4434ac"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.489152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" event={"ID":"4ec5b344-1e65-4c9a-895c-f08dd626d231","Type":"ContainerStarted","Data":"9045daa1a69648160a418f37526ca2574249855342309bb1cf904261005dc58c"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.489163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" event={"ID":"4ec5b344-1e65-4c9a-895c-f08dd626d231","Type":"ContainerStarted","Data":"21fd3f682f6a867374334ac60c60dace874a07d1e3c9618a8140a4af0b68b704"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.490782 4792 generic.go:334] "Generic (PLEG): container finished" podID="d7baac81-f46f-4e76-9333-95dcdc915c42" containerID="0a429a79c06b916dfb865f69550a469f11df9b69136edbf95bd1f1ea91dfdd17" exitCode=0
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.490834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblwf" event={"ID":"d7baac81-f46f-4e76-9333-95dcdc915c42","Type":"ContainerDied","Data":"0a429a79c06b916dfb865f69550a469f11df9b69136edbf95bd1f1ea91dfdd17"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.490854 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblwf" event={"ID":"d7baac81-f46f-4e76-9333-95dcdc915c42","Type":"ContainerStarted","Data":"76f94cfab6b980fec2b93dffe2c4546c3c3f07a063b6f570b9634ba16da557c8"}
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.532139 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.532206 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-grpc-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdq6t\" (UniqueName: \"kubernetes.io/projected/90fa52da-61b8-4afc-9e6f-52112bb14dea-kube-api-access-bdq6t\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.567574 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90fa52da-61b8-4afc-9e6f-52112bb14dea-metrics-client-ca\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.580112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-grpc-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.581169 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/90fa52da-61b8-4afc-9e6f-52112bb14dea-metrics-client-ca\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.583727 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.584945 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.586809 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-tls\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.586855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.599481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/90fa52da-61b8-4afc-9e6f-52112bb14dea-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.604348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdq6t\" (UniqueName: \"kubernetes.io/projected/90fa52da-61b8-4afc-9e6f-52112bb14dea-kube-api-access-bdq6t\") pod \"thanos-querier-77f559c558-dggk9\" (UID: \"90fa52da-61b8-4afc-9e6f-52112bb14dea\") " pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:01 crc kubenswrapper[4792]: I0216 21:44:01.722572 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:02 crc kubenswrapper[4792]: I0216 21:44:02.514870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmzts" event={"ID":"7cb484ab-fa97-4c10-a78e-20a51ec6618b","Type":"ContainerStarted","Data":"b6ad8be1a349a657f2c74238e8852fd6c46fd06dd697715b1637f355a966dfbd"}
Feb 16 21:44:02 crc kubenswrapper[4792]: I0216 21:44:02.518394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4q8b7" event={"ID":"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1","Type":"ContainerStarted","Data":"8c9f40027c97700f263f341b8b525b8afaba6fdd6955b23beaa97944f9e0df97"}
Feb 16 21:44:02 crc kubenswrapper[4792]: I0216 21:44:02.520491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9kjkj" event={"ID":"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c","Type":"ContainerStarted","Data":"87f2e1c2539ee9f3123382b870c6e746b8e827de5106961bc90fe6e0c32bc9c7"}
Feb 16 21:44:02 crc kubenswrapper[4792]: I0216 21:44:02.698644 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-77f559c558-dggk9"]
Feb 16 21:44:02 crc kubenswrapper[4792]: W0216 21:44:02.903804 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90fa52da_61b8_4afc_9e6f_52112bb14dea.slice/crio-579d3b13315cb249c791db840336ffb42b325a3a4920c7d1d08d532c144fdb90 WatchSource:0}: Error finding container 579d3b13315cb249c791db840336ffb42b325a3a4920c7d1d08d532c144fdb90: Status 404 returned error can't find the container with id 579d3b13315cb249c791db840336ffb42b325a3a4920c7d1d08d532c144fdb90
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.528609 4792 generic.go:334] "Generic (PLEG): container finished" podID="d7baac81-f46f-4e76-9333-95dcdc915c42" containerID="14e3f4e6b5de643e4a13e74a57bc16d342f7d08ffc10a036ac99f293f96c397a" exitCode=0
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.528921 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblwf" event={"ID":"d7baac81-f46f-4e76-9333-95dcdc915c42","Type":"ContainerDied","Data":"14e3f4e6b5de643e4a13e74a57bc16d342f7d08ffc10a036ac99f293f96c397a"}
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.534615 4792 generic.go:334] "Generic (PLEG): container finished" podID="9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c" containerID="87f2e1c2539ee9f3123382b870c6e746b8e827de5106961bc90fe6e0c32bc9c7" exitCode=0
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.534690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9kjkj" event={"ID":"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c","Type":"ContainerDied","Data":"87f2e1c2539ee9f3123382b870c6e746b8e827de5106961bc90fe6e0c32bc9c7"}
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.538990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" event={"ID":"6923a9c3-34fb-43fb-a93b-19bef32e0b6f","Type":"ContainerStarted","Data":"0517e7c59e8e8755ec7b960d869e77b59ac787da0dfa508d0d8694984464c952"}
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.540313 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"579d3b13315cb249c791db840336ffb42b325a3a4920c7d1d08d532c144fdb90"}
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.543928 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1" containerID="8c9f40027c97700f263f341b8b525b8afaba6fdd6955b23beaa97944f9e0df97" exitCode=0
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.544952 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4q8b7" event={"ID":"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1","Type":"ContainerDied","Data":"8c9f40027c97700f263f341b8b525b8afaba6fdd6955b23beaa97944f9e0df97"}
Feb 16 21:44:03 crc kubenswrapper[4792]: I0216 21:44:03.601240 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fmzts" podStartSLOduration=2.785003147 podStartE2EDuration="6.601218383s" podCreationTimestamp="2026-02-16 21:43:57 +0000 UTC" firstStartedPulling="2026-02-16 21:43:58.439990347 +0000 UTC m=+371.093269238" lastFinishedPulling="2026-02-16 21:44:02.256205583 +0000 UTC m=+374.909484474" observedRunningTime="2026-02-16 21:44:03.596892193 +0000 UTC m=+376.250171094" watchObservedRunningTime="2026-02-16 21:44:03.601218383 +0000 UTC m=+376.254497274"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.121713 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.124302 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.134699 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202278 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8ff\" (UniqueName: \"kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202418 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.202455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304281 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8ff\" (UniqueName: \"kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304356 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.304409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.306868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.307497 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.307547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.308926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.312398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.315330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.327491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp8ff\" (UniqueName: \"kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff\") pod \"console-7575d9dcf4-vv2fk\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.440109 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.464083 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.464822 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.468950 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.469087 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.469087 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.469334 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-sr4rb"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.469801 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-36d87jrfpkroh"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.472327 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.476038 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.506751 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc77\" (UniqueName: \"kubernetes.io/projected/65f006d8-41ba-4902-92d1-866f080ef153-kube-api-access-jvc77\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.506796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-metrics-server-audit-profiles\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.506819 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-client-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.506839 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-client-certs\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.506943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.507079 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-server-tls\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.507147 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65f006d8-41ba-4902-92d1-866f080ef153-audit-log\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.551915 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9kjkj" event={"ID":"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c","Type":"ContainerStarted","Data":"1b09ad57c34d2518c5f824e16a0824c873291779131cd2ab54016610acc64df5"}
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.554311 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" event={"ID":"6923a9c3-34fb-43fb-a93b-19bef32e0b6f","Type":"ContainerStarted","Data":"84e3fa531ec438cf3c928e1d0fc05c65278762b7ef5cdb898c091d2f31b65fd1"}
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608642 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvc77\" (UniqueName: \"kubernetes.io/projected/65f006d8-41ba-4902-92d1-866f080ef153-kube-api-access-jvc77\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-metrics-server-audit-profiles\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608756 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-client-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608802 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-client-certs\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608873 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-server-tls\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.608985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65f006d8-41ba-4902-92d1-866f080ef153-audit-log\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.609723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/65f006d8-41ba-4902-92d1-866f080ef153-audit-log\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.609908 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.610641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/65f006d8-41ba-4902-92d1-866f080ef153-metrics-server-audit-profiles\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.613434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-client-certs\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.614347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-secret-metrics-server-tls\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.618626 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f006d8-41ba-4902-92d1-866f080ef153-client-ca-bundle\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.628375 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvc77\" (UniqueName: \"kubernetes.io/projected/65f006d8-41ba-4902-92d1-866f080ef153-kube-api-access-jvc77\") pod \"metrics-server-6bd8fbb5df-dkthz\" (UID: \"65f006d8-41ba-4902-92d1-866f080ef153\") " pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.709676 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.717467 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" podUID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" containerName="controller-manager" containerID="cri-o://968761cd3bae3200d2d76c7046040df941b15f48489433b0ec16cc9e0ee06af3" gracePeriod=30
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.733925 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"]
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.734179 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" podUID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" containerName="route-controller-manager" containerID="cri-o://f2eef1f330949ce5139fc1100c1bc55a0e0b869bae417dd164f166aabe6c3d7b" gracePeriod=30
Feb 16 21:44:04 crc kubenswrapper[4792]: I0216 21:44:04.797155 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.050412 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"]
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.052977 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.053681 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"]
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.056239 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.056539 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.117224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/79015bb6-7792-4420-80a3-bfcc7da42a71-monitoring-plugin-cert\") pod \"monitoring-plugin-5fc6555665-ccwpw\" (UID: \"79015bb6-7792-4420-80a3-bfcc7da42a71\") " pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.218282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/79015bb6-7792-4420-80a3-bfcc7da42a71-monitoring-plugin-cert\") pod \"monitoring-plugin-5fc6555665-ccwpw\" (UID: \"79015bb6-7792-4420-80a3-bfcc7da42a71\") " pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.224575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/79015bb6-7792-4420-80a3-bfcc7da42a71-monitoring-plugin-cert\") pod \"monitoring-plugin-5fc6555665-ccwpw\" (UID: \"79015bb6-7792-4420-80a3-bfcc7da42a71\") " pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.372931 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.566437 4792 generic.go:334] "Generic (PLEG): container finished" podID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" containerID="968761cd3bae3200d2d76c7046040df941b15f48489433b0ec16cc9e0ee06af3" exitCode=0
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.566532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" event={"ID":"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b","Type":"ContainerDied","Data":"968761cd3bae3200d2d76c7046040df941b15f48489433b0ec16cc9e0ee06af3"}
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.577447 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.577547 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9kjkj" event={"ID":"9768fdd2-7a08-4d5b-a860-b16c3cbd5d8c","Type":"ContainerStarted","Data":"d07c07ecb7257c4d2ae57fe64c92695509c908febb6d3d59ae5bc5d9fd1d6c64"}
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.579531 4792 generic.go:334] "Generic (PLEG): container finished" podID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" containerID="f2eef1f330949ce5139fc1100c1bc55a0e0b869bae417dd164f166aabe6c3d7b" exitCode=0
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.579574 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" event={"ID":"79fa2e58-3e1c-4021-bc1b-93c20da8b080","Type":"ContainerDied","Data":"f2eef1f330949ce5139fc1100c1bc55a0e0b869bae417dd164f166aabe6c3d7b"}
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.580056 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624318 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca\") pod \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624408 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles\") pod \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624435 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert\") pod \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config\") pod \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624520 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca\") pod \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624549 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert\") pod \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kql5q\" (UniqueName: \"kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q\") pod \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624633 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config\") pod \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\" (UID: \"79fa2e58-3e1c-4021-bc1b-93c20da8b080\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.624671 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8dbx\" (UniqueName: \"kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx\") pod \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\" (UID: \"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b\") "
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.628405 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" (UID: "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.629247 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config" (OuterVolumeSpecName: "config") pod "79fa2e58-3e1c-4021-bc1b-93c20da8b080" (UID: "79fa2e58-3e1c-4021-bc1b-93c20da8b080"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.629340 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca" (OuterVolumeSpecName: "client-ca") pod "79fa2e58-3e1c-4021-bc1b-93c20da8b080" (UID: "79fa2e58-3e1c-4021-bc1b-93c20da8b080"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.631553 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" (UID: "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.628216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config" (OuterVolumeSpecName: "config") pod "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" (UID: "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.639779 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx" (OuterVolumeSpecName: "kube-api-access-j8dbx") pod "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" (UID: "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b"). InnerVolumeSpecName "kube-api-access-j8dbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.640552 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79fa2e58-3e1c-4021-bc1b-93c20da8b080" (UID: "79fa2e58-3e1c-4021-bc1b-93c20da8b080"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.642462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q" (OuterVolumeSpecName: "kube-api-access-kql5q") pod "79fa2e58-3e1c-4021-bc1b-93c20da8b080" (UID: "79fa2e58-3e1c-4021-bc1b-93c20da8b080"). InnerVolumeSpecName "kube-api-access-kql5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.643650 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" (UID: "97c8c4ba-9fe6-4dcf-ad81-676030c75b2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.666746 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-9kjkj" podStartSLOduration=3.992119996 podStartE2EDuration="6.666721988s" podCreationTimestamp="2026-02-16 21:43:59 +0000 UTC" firstStartedPulling="2026-02-16 21:43:59.547797441 +0000 UTC m=+372.201076332" lastFinishedPulling="2026-02-16 21:44:02.222399433 +0000 UTC m=+374.875678324" observedRunningTime="2026-02-16 21:44:05.659052355 +0000 UTC m=+378.312331246" watchObservedRunningTime="2026-02-16 21:44:05.666721988 +0000 UTC m=+378.320000889"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.722490 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 21:44:05 crc kubenswrapper[4792]: E0216 21:44:05.722752 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" containerName="route-controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.722765 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" containerName="route-controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: E0216 21:44:05.722786 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" containerName="controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.722792 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" containerName="controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.722892 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" containerName="controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.722904 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" containerName="route-controller-manager"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725009 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-twrqg"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725231 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725651 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725663 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8dbx\" (UniqueName: \"kubernetes.io/projected/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-kube-api-access-j8dbx\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725672 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725681 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725689 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725698 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b-config\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.725706 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79fa2e58-3e1c-4021-bc1b-93c20da8b080-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.726408 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79fa2e58-3e1c-4021-bc1b-93c20da8b080-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.726418 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kql5q\" (UniqueName: \"kubernetes.io/projected/79fa2e58-3e1c-4021-bc1b-93c20da8b080-kube-api-access-kql5q\") on node \"crc\" DevicePath \"\""
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728119 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728409 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728440 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728538 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728617 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cscv94dmr09ua"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.728707 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.729092 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.729392 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.729491 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.729801 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.737402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.738479 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-27m7z"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.740337 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.744779 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.813643 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"]
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.834940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.834986 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv7sj\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-kube-api-access-zv7sj\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835083 4792
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-config-out\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835100 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835141 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835188 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835208 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835275 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835311 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-web-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.835400 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.839968 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"] Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.885415 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"] Feb 16 21:44:05 crc kubenswrapper[4792]: W0216 21:44:05.889212 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod152960a0_1edd_4b0a_912b_c577cf58942c.slice/crio-4ab033af3fe0660e6819d1bf90b347052127fb85f6d29aa51f1060699abf3bcd WatchSource:0}: Error finding container 4ab033af3fe0660e6819d1bf90b347052127fb85f6d29aa51f1060699abf3bcd: Status 404 returned error can't find the container with id 4ab033af3fe0660e6819d1bf90b347052127fb85f6d29aa51f1060699abf3bcd Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.936638 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.936896 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-config-out\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.936923 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.936945 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.936978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937030 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937062 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-web-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937165 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937181 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937228 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv7sj\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-kube-api-access-zv7sj\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.937267 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.938490 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.939002 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.939012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.942211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.946231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.949432 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.950515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e26624cb-2d38-40b9-9750-03225048edc4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.952414 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.953420 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.953986 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.955045 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.957255 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e26624cb-2d38-40b9-9750-03225048edc4-config-out\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.957361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.959026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.959436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-web-config\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.959502 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.962540 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e26624cb-2d38-40b9-9750-03225048edc4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:05 crc kubenswrapper[4792]: I0216 21:44:05.970668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv7sj\" (UniqueName: \"kubernetes.io/projected/e26624cb-2d38-40b9-9750-03225048edc4-kube-api-access-zv7sj\") pod \"prometheus-k8s-0\" (UID: \"e26624cb-2d38-40b9-9750-03225048edc4\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.072370 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.113137 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.350916 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.351813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.354715 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-988d4b47d-rgq9w"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.355313 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.365356 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-988d4b47d-rgq9w"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.369520 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443227 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-serving-cert\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-config\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acbcf4b-af7b-416c-9c35-df103e320f31-serving-cert\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxsf\" (UniqueName: \"kubernetes.io/projected/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-kube-api-access-zbxsf\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443742 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-config\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: 
\"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443786 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-client-ca\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.443969 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgzb\" (UniqueName: \"kubernetes.io/projected/9acbcf4b-af7b-416c-9c35-df103e320f31-kube-api-access-vxgzb\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.444040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-client-ca\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.444070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-proxy-ca-bundles\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.509850 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-client-ca\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-proxy-ca-bundles\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-serving-cert\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-config\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acbcf4b-af7b-416c-9c35-df103e320f31-serving-cert\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbxsf\" (UniqueName: \"kubernetes.io/projected/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-kube-api-access-zbxsf\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.545963 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-config\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.546017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-client-ca\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.546054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxgzb\" (UniqueName: \"kubernetes.io/projected/9acbcf4b-af7b-416c-9c35-df103e320f31-kube-api-access-vxgzb\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.547020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-client-ca\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.547322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-proxy-ca-bundles\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.547396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9acbcf4b-af7b-416c-9c35-df103e320f31-config\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: 
\"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.548191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-client-ca\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.548584 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-config\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.551409 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acbcf4b-af7b-416c-9c35-df103e320f31-serving-cert\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.551425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-serving-cert\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.562113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxgzb\" (UniqueName: \"kubernetes.io/projected/9acbcf4b-af7b-416c-9c35-df103e320f31-kube-api-access-vxgzb\") pod \"route-controller-manager-6777c49cdb-p99rs\" (UID: \"9acbcf4b-af7b-416c-9c35-df103e320f31\") " pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.567078 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbxsf\" (UniqueName: \"kubernetes.io/projected/f4d7dc5e-869b-4ff9-9511-9fcc04ec707a-kube-api-access-zbxsf\") pod \"controller-manager-988d4b47d-rgq9w\" (UID: \"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a\") " pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.586269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7575d9dcf4-vv2fk" event={"ID":"152960a0-1edd-4b0a-912b-c577cf58942c","Type":"ContainerStarted","Data":"4ab033af3fe0660e6819d1bf90b347052127fb85f6d29aa51f1060699abf3bcd"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.587690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"ddedd6f94a0bf6f248775a26a27463bd1a62e6c79d40b9854b5b01fc5f482f96"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.591403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblwf" 
event={"ID":"d7baac81-f46f-4e76-9333-95dcdc915c42","Type":"ContainerStarted","Data":"85b6e3f5c9052044d92a2312bfd38eb48e0cf145de9b466b3cdc88f34c8fa42c"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.593192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9xfg" event={"ID":"da72596c-78d5-40d7-99b1-282bb5bdeb6e","Type":"ContainerStarted","Data":"6288e98a1c14dcdb409eb6b0bc6af466ce6269401a9a6e87c5423a26c0df1aeb"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.595476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" event={"ID":"4ec5b344-1e65-4c9a-895c-f08dd626d231","Type":"ContainerStarted","Data":"538891c1a95e1e3c10a2152e0fa853632e51f747664338a01c458aaa73e62483"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.596721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" event={"ID":"97c8c4ba-9fe6-4dcf-ad81-676030c75b2b","Type":"ContainerDied","Data":"d2f36b57e13fe2d0c6fecd2b591a543263b23db1fce1a1c9caf69f99637f86af"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.596755 4792 scope.go:117] "RemoveContainer" containerID="968761cd3bae3200d2d76c7046040df941b15f48489433b0ec16cc9e0ee06af3" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.596754 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c788c996c-c6gq4" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.597709 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"706cb939a698c4d8baafd1d10a958c7e4fb96997641fb2274e6a38bd75917d97"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.599343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz" event={"ID":"65f006d8-41ba-4902-92d1-866f080ef153","Type":"ContainerStarted","Data":"a965bcbfd81fa47e6ccf44de157829076ab876795ea925c359d9cc7045ff4de0"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.600028 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw" event={"ID":"79015bb6-7792-4420-80a3-bfcc7da42a71","Type":"ContainerStarted","Data":"bb29a7a031324f4e2c764f060bb7f6253537bd64c92d0d18f036ec50b6bc3a92"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.601610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" event={"ID":"6923a9c3-34fb-43fb-a93b-19bef32e0b6f","Type":"ContainerStarted","Data":"43d0a9c99b9ccb3eeb9d9051443c33916afbf73138e248ed5ee45be34e7f7b5e"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.603442 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" event={"ID":"79fa2e58-3e1c-4021-bc1b-93c20da8b080","Type":"ContainerDied","Data":"00c085f5bf39494546f0283a98402576a9200366af0189c76fcfe977d6cd7dce"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.603526 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.610922 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4q8b7" event={"ID":"ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1","Type":"ContainerStarted","Data":"00325f38465f17ccc33f1b7014b825853d065303c24ca7eeafcc69b06a89c178"} Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.629957 4792 scope.go:117] "RemoveContainer" containerID="f2eef1f330949ce5139fc1100c1bc55a0e0b869bae417dd164f166aabe6c3d7b" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.654471 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pblwf" podStartSLOduration=2.864020613 podStartE2EDuration="6.654452753s" podCreationTimestamp="2026-02-16 21:44:00 +0000 UTC" firstStartedPulling="2026-02-16 21:44:01.546686773 +0000 UTC m=+374.199965664" lastFinishedPulling="2026-02-16 21:44:05.337118913 +0000 UTC m=+377.990397804" observedRunningTime="2026-02-16 21:44:06.654079353 +0000 UTC m=+379.307358254" watchObservedRunningTime="2026-02-16 21:44:06.654452753 +0000 UTC m=+379.307731634" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.670743 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hqfqw" podStartSLOduration=3.81169955 podStartE2EDuration="7.670723906s" podCreationTimestamp="2026-02-16 21:43:59 +0000 UTC" firstStartedPulling="2026-02-16 21:44:01.468375456 +0000 UTC m=+374.121654347" lastFinishedPulling="2026-02-16 21:44:05.327399822 +0000 UTC m=+377.980678703" observedRunningTime="2026-02-16 21:44:06.669707608 +0000 UTC m=+379.322986509" watchObservedRunningTime="2026-02-16 21:44:06.670723906 +0000 UTC m=+379.324002797" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.671765 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.679664 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.697576 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4q8b7" podStartSLOduration=2.8087950619999997 podStartE2EDuration="7.697553582s" podCreationTimestamp="2026-02-16 21:43:59 +0000 UTC" firstStartedPulling="2026-02-16 21:44:00.474626303 +0000 UTC m=+373.127905194" lastFinishedPulling="2026-02-16 21:44:05.363384823 +0000 UTC m=+378.016663714" observedRunningTime="2026-02-16 21:44:06.694518018 +0000 UTC m=+379.347796919" watchObservedRunningTime="2026-02-16 21:44:06.697553582 +0000 UTC m=+379.350832473" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.718908 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-p6nt6" podStartSLOduration=4.661507939 podStartE2EDuration="7.718872145s" podCreationTimestamp="2026-02-16 21:43:59 +0000 UTC" firstStartedPulling="2026-02-16 21:44:00.073790187 +0000 UTC m=+372.727069078" lastFinishedPulling="2026-02-16 21:44:03.131154393 +0000 UTC m=+375.784433284" observedRunningTime="2026-02-16 21:44:06.717346242 +0000 UTC m=+379.370625143" watchObservedRunningTime="2026-02-16 21:44:06.718872145 +0000 UTC m=+379.372151036" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.745293 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9xfg" podStartSLOduration=3.947239696 podStartE2EDuration="9.745275559s" podCreationTimestamp="2026-02-16 21:43:57 +0000 UTC" firstStartedPulling="2026-02-16 21:43:59.451524285 +0000 UTC m=+372.104803186" lastFinishedPulling="2026-02-16 21:44:05.249560148 +0000 UTC m=+377.902839049" observedRunningTime="2026-02-16 21:44:06.741388711 +0000 UTC m=+379.394667622" watchObservedRunningTime="2026-02-16 21:44:06.745275559 +0000 UTC m=+379.398554450" Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.753123 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.757573 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c788c996c-c6gq4"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.764701 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"] Feb 16 21:44:06 crc kubenswrapper[4792]: I0216 21:44:06.768694 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-kzjtw"] Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.121996 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-988d4b47d-rgq9w"] Feb 16 21:44:07 crc kubenswrapper[4792]: W0216 21:44:07.127183 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4d7dc5e_869b_4ff9_9511_9fcc04ec707a.slice/crio-765cba86ad1966ca5fc8a23375cca09cc0da08447ef787ded3f2b52d6aa9b05a WatchSource:0}: Error finding container 765cba86ad1966ca5fc8a23375cca09cc0da08447ef787ded3f2b52d6aa9b05a: Status 404 returned error can't find the container with id 765cba86ad1966ca5fc8a23375cca09cc0da08447ef787ded3f2b52d6aa9b05a Feb 16 21:44:07 crc kubenswrapper[4792]: 
I0216 21:44:07.200992 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs"]
Feb 16 21:44:07 crc kubenswrapper[4792]: W0216 21:44:07.207797 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acbcf4b_af7b_416c_9c35_df103e320f31.slice/crio-0ad852d8651b55ca00c0d82a513a2f216e07f2667d0bb2dba13eae8c9641c6eb WatchSource:0}: Error finding container 0ad852d8651b55ca00c0d82a513a2f216e07f2667d0bb2dba13eae8c9641c6eb: Status 404 returned error can't find the container with id 0ad852d8651b55ca00c0d82a513a2f216e07f2667d0bb2dba13eae8c9641c6eb
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.618833 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" event={"ID":"9acbcf4b-af7b-416c-9c35-df103e320f31","Type":"ContainerStarted","Data":"0ad852d8651b55ca00c0d82a513a2f216e07f2667d0bb2dba13eae8c9641c6eb"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.639321 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7575d9dcf4-vv2fk" event={"ID":"152960a0-1edd-4b0a-912b-c577cf58942c","Type":"ContainerStarted","Data":"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.650563 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd2c43d0-5333-4f78-96d3-9ed86ecfd602" containerID="ddedd6f94a0bf6f248775a26a27463bd1a62e6c79d40b9854b5b01fc5f482f96" exitCode=0
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.650690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerDied","Data":"ddedd6f94a0bf6f248775a26a27463bd1a62e6c79d40b9854b5b01fc5f482f96"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.669774 4792 generic.go:334] "Generic (PLEG): container finished" podID="e26624cb-2d38-40b9-9750-03225048edc4" containerID="c4ebba50df7ed107bb4d5b5f82c6e1b90c45c885dca2d9bed1384c85a6af6fae" exitCode=0
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.669927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerDied","Data":"c4ebba50df7ed107bb4d5b5f82c6e1b90c45c885dca2d9bed1384c85a6af6fae"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.676196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" event={"ID":"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a","Type":"ContainerStarted","Data":"7531e0cae04497320b66f06ce10d7d2d3fe3d720d574bea439d006a3aa8910f4"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.676768 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.676808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" event={"ID":"f4d7dc5e-869b-4ff9-9511-9fcc04ec707a","Type":"ContainerStarted","Data":"765cba86ad1966ca5fc8a23375cca09cc0da08447ef787ded3f2b52d6aa9b05a"}
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.707761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.726244 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmzts"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.726304 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmzts"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.732787 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7575d9dcf4-vv2fk" podStartSLOduration=3.732771548 podStartE2EDuration="3.732771548s" podCreationTimestamp="2026-02-16 21:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:44:07.681167933 +0000 UTC m=+380.334446844" watchObservedRunningTime="2026-02-16 21:44:07.732771548 +0000 UTC m=+380.386050439"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.781999 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-988d4b47d-rgq9w" podStartSLOduration=3.781981646 podStartE2EDuration="3.781981646s" podCreationTimestamp="2026-02-16 21:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:44:07.778676534 +0000 UTC m=+380.431955445" watchObservedRunningTime="2026-02-16 21:44:07.781981646 +0000 UTC m=+380.435260537"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.792075 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmzts"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.915848 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9xfg"
Feb 16 21:44:07 crc kubenswrapper[4792]: I0216 21:44:07.915905 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g9xfg"
Feb 16 21:44:08 crc kubenswrapper[4792]: I0216 21:44:08.032804 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fa2e58-3e1c-4021-bc1b-93c20da8b080" path="/var/lib/kubelet/pods/79fa2e58-3e1c-4021-bc1b-93c20da8b080/volumes"
Feb 16 21:44:08 crc kubenswrapper[4792]: I0216 21:44:08.033575 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c8c4ba-9fe6-4dcf-ad81-676030c75b2b" path="/var/lib/kubelet/pods/97c8c4ba-9fe6-4dcf-ad81-676030c75b2b/volumes"
Feb 16 21:44:08 crc kubenswrapper[4792]: I0216 21:44:08.737168 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fmzts"
Feb 16 21:44:08 crc kubenswrapper[4792]: I0216 21:44:08.987217 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9xfg" podUID="da72596c-78d5-40d7-99b1-282bb5bdeb6e" containerName="registry-server" probeResult="failure" output=<
Feb 16 21:44:08 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 21:44:08 crc kubenswrapper[4792]: >
Feb 16 21:44:09 crc kubenswrapper[4792]: I0216 21:44:09.547661 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4q8b7"
Feb 16 21:44:09 crc kubenswrapper[4792]: I0216 21:44:09.548055 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4q8b7"
Feb 16 21:44:09 crc kubenswrapper[4792]: I0216 21:44:09.604891 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4q8b7"
Feb 16 21:44:09 crc kubenswrapper[4792]: I0216 21:44:09.692082 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" event={"ID":"9acbcf4b-af7b-416c-9c35-df103e320f31","Type":"ContainerStarted","Data":"54ab044a55a9324519166d5c33096d207faf80ec465dceb2243bc2cc5b609e3a"}
Feb 16 21:44:09 crc kubenswrapper[4792]: I0216 21:44:09.720003 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs" podStartSLOduration=5.719981486 podStartE2EDuration="5.719981486s" podCreationTimestamp="2026-02-16 21:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:44:09.714274466 +0000 UTC m=+382.367553387" watchObservedRunningTime="2026-02-16 21:44:09.719981486 +0000 UTC m=+382.373260377"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.538178 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pblwf"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.538241 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pblwf"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.579460 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pblwf"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.710331 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.717815 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6777c49cdb-p99rs"
Feb 16 21:44:10 crc kubenswrapper[4792]: I0216 21:44:10.750704 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pblwf"
Feb 16 21:44:14 crc kubenswrapper[4792]: I0216 21:44:14.441305 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:14 crc kubenswrapper[4792]: I0216 21:44:14.441897 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:14 crc kubenswrapper[4792]: I0216 21:44:14.447884 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:14 crc kubenswrapper[4792]: I0216 21:44:14.746692 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7575d9dcf4-vv2fk"
Feb 16 21:44:14 crc kubenswrapper[4792]: I0216 21:44:14.812294 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tr7np"]
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.745588 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz" event={"ID":"65f006d8-41ba-4902-92d1-866f080ef153","Type":"ContainerStarted","Data":"ef6cbcba1759496bf65a674252c368f0bfdfb746fdb5667a8920c28bc6bd7112"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.747641 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw" event={"ID":"79015bb6-7792-4420-80a3-bfcc7da42a71","Type":"ContainerStarted","Data":"067c2b9c59ba0c339cbae1d30efdc6f5df4510a7456b1e59ebd36797b5b79926"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.748042 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.750088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"a3a164bfe359ec1b070f28fa2c53b1dd8d367d6069c0ea43295731aa89403121"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.750135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"100aaf389d95fd8b9ee191a4bfe4f453568bd25f580ce6bb340741eb672cb2f8"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.760520 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"86d81d17aee5f81a9b73a9efdbabbd60ab2411f6bb247912cc1434fb58378dd1"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.772067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"78f6739088d8b002449b9284becd9c669971ec9cfc16d96d3bad044d16691e22"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.775410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"48036e25e7e62537fad372bf41a3d86ac5507e74dc6dc3041bab934ec6aca852"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.775501 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"e187c680fdf22d9b2bbddbe9563e9ccfefa75166a1fb14e893f502971c2bc4ba"}
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.775727 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw"
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.789862 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz" podStartSLOduration=2.48312995 podStartE2EDuration="11.789829798s" podCreationTimestamp="2026-02-16 21:44:04 +0000 UTC" firstStartedPulling="2026-02-16 21:44:05.839257636 +0000 UTC m=+378.492536527" lastFinishedPulling="2026-02-16 21:44:15.145957484 +0000 UTC m=+387.799236375" observedRunningTime="2026-02-16 21:44:15.785665682 +0000 UTC m=+388.438944573" watchObservedRunningTime="2026-02-16 21:44:15.789829798 +0000 UTC m=+388.443108689"
Feb 16 21:44:15 crc kubenswrapper[4792]: I0216 21:44:15.804347 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5fc6555665-ccwpw" podStartSLOduration=1.809718201 podStartE2EDuration="10.804330211s" podCreationTimestamp="2026-02-16 21:44:05 +0000 UTC" firstStartedPulling="2026-02-16 21:44:06.127412229 +0000 UTC m=+378.780691120" lastFinishedPulling="2026-02-16 21:44:15.122024249 +0000 UTC m=+387.775303130" observedRunningTime="2026-02-16 21:44:15.801988095 +0000 UTC m=+388.455266986" watchObservedRunningTime="2026-02-16 21:44:15.804330211 +0000 UTC m=+388.457609102"
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.784242 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"8311879c3900863b23c3aebea712e76a42b16fb760c5931264b08aa89095916e"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.788184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"ba7693183efc52883394f9fe5ca69a07aa57bcee7adb2772755dca59840cc59e"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.788211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"9e299793e018f36ecce1fec3fac0d933f4ae2015fd5b83ec66d2cd443498b17d"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.788224 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"9af5546e4a6ccf632bd82f807ea859bc3ebd5d4a02e9975b16d17e08e452db44"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.792245 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"2bc0e85e6a287206b0fd8a198616526b9783ea81b4caf3ffc9667395b51a10d6"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.792300 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"5e58618349fc8108dff6332c0e320040352d6d3e8af55ab1d7fb305aa887005f"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.792312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"574ee174be544f53aba420c3df36ce1a9e73198cecb569ee2091fd04c55b45c1"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.792324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e26624cb-2d38-40b9-9750-03225048edc4","Type":"ContainerStarted","Data":"876e1bc890c6581c03718bac297c9cb4334aa52d0919257e236d73afc05c7176"}
Feb 16 21:44:16 crc kubenswrapper[4792]: I0216 21:44:16.821305 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.298444463 podStartE2EDuration="11.821286969s" podCreationTimestamp="2026-02-16 21:44:05 +0000 UTC" firstStartedPulling="2026-02-16 21:44:07.67314911 +0000 UTC m=+380.326428001" lastFinishedPulling="2026-02-16 21:44:15.195991616 +0000 UTC m=+387.849270507" observedRunningTime="2026-02-16 21:44:16.817557345 +0000 UTC m=+389.470836266" watchObservedRunningTime="2026-02-16 21:44:16.821286969 +0000 UTC m=+389.474565870"
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.804908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"4490b09840f223676dcdfa52ab61fb5be28ce6d483adc818fc1cdb152e107e52"}
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.804966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"94a45b322c7523ed5852cccfbbbe46fa6aded2e2908e8e4d9db172844385dae2"}
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.804986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" event={"ID":"90fa52da-61b8-4afc-9e6f-52112bb14dea","Type":"ContainerStarted","Data":"ee64e1b358749ac342ecd091e0e80d0cf3cee24dd516c9d9f2dbbc2161f78ffc"}
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.805089 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.809797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"bd2c43d0-5333-4f78-96d3-9ed86ecfd602","Type":"ContainerStarted","Data":"95168d8ca52266e0b9dd26c00d9860cce7af6bb652382c803e73b04b827097e6"}
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.836694 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9" podStartSLOduration=2.979381209 podStartE2EDuration="16.836670814s" podCreationTimestamp="2026-02-16 21:44:01 +0000 UTC" firstStartedPulling="2026-02-16 21:44:02.951539577 +0000 UTC m=+375.604818468" lastFinishedPulling="2026-02-16 21:44:16.808829172 +0000 UTC m=+389.462108073" observedRunningTime="2026-02-16 21:44:17.832114216 +0000 UTC m=+390.485393147" watchObservedRunningTime="2026-02-16 21:44:17.836670814 +0000 UTC m=+390.489949715"
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.889066 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.280637162 podStartE2EDuration="17.88903317s" podCreationTimestamp="2026-02-16 21:44:00 +0000 UTC" firstStartedPulling="2026-02-16 21:44:01.20422657 +0000 UTC m=+373.857505461" lastFinishedPulling="2026-02-16 21:44:16.812622578 +0000 UTC m=+389.465901469" observedRunningTime="2026-02-16 21:44:17.860571338 +0000 UTC m=+390.513850259" watchObservedRunningTime="2026-02-16 21:44:17.88903317 +0000 UTC m=+390.542312081"
Feb 16 21:44:17 crc kubenswrapper[4792]: I0216 21:44:17.962542 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9xfg"
Feb 16 21:44:18 crc kubenswrapper[4792]: I0216 21:44:18.003887 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9xfg"
Feb 16 21:44:19 crc kubenswrapper[4792]: I0216 21:44:19.620452 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4q8b7"
Feb 16 21:44:21 crc kubenswrapper[4792]: I0216 21:44:21.073135 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:44:21 crc kubenswrapper[4792]: I0216 21:44:21.739434 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-77f559c558-dggk9"
Feb 16 21:44:24 crc kubenswrapper[4792]: I0216 21:44:24.797629 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:24 crc kubenswrapper[4792]: I0216 21:44:24.798145 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz"
Feb 16 21:44:30 crc kubenswrapper[4792]: I0216 21:44:30.890675 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" podUID="abd983af-64e8-4770-842c-9335c49ae36d" containerName="registry" containerID="cri-o://33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f" gracePeriod=30
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.415428 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb"
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.517303 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") "
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.517391 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") "
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.517424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") "
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.517491 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4v2p\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") "
Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.518485 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.518869 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.518948 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.518989 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.519013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets\") pod \"abd983af-64e8-4770-842c-9335c49ae36d\" (UID: \"abd983af-64e8-4770-842c-9335c49ae36d\") " Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.519267 4792 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.520620 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.524007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.524031 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.525518 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.527197 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p" (OuterVolumeSpecName: "kube-api-access-v4v2p") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "kube-api-access-v4v2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.530919 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.532200 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.532255 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.532301 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.533157 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.533232 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f" gracePeriod=600 Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.534870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "abd983af-64e8-4770-842c-9335c49ae36d" (UID: "abd983af-64e8-4770-842c-9335c49ae36d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.620574 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/abd983af-64e8-4770-842c-9335c49ae36d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.620973 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/abd983af-64e8-4770-842c-9335c49ae36d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.620993 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.621012 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4v2p\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-kube-api-access-v4v2p\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.621028 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abd983af-64e8-4770-842c-9335c49ae36d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.621047 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/abd983af-64e8-4770-842c-9335c49ae36d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.912056 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f" exitCode=0 Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.912119 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f"} Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.912146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b"} Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.912163 4792 scope.go:117] "RemoveContainer" containerID="3e4b8adf82df561e483106cc812a74c465b4e28d95c8aaf2c364b18463361a3b" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.915817 4792 generic.go:334] "Generic (PLEG): container finished" podID="abd983af-64e8-4770-842c-9335c49ae36d" containerID="33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f" exitCode=0 Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.915908 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.916200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" event={"ID":"abd983af-64e8-4770-842c-9335c49ae36d","Type":"ContainerDied","Data":"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f"} Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.916303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cpksb" event={"ID":"abd983af-64e8-4770-842c-9335c49ae36d","Type":"ContainerDied","Data":"0d903b8cda092b0bf6f174e9f4f617971d20c2b847bfad6a66bcac797ed6f290"} Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.967204 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"] Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.969991 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cpksb"] Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.971530 4792 scope.go:117] "RemoveContainer" containerID="33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.992532 4792 scope.go:117] "RemoveContainer" containerID="33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f" Feb 16 21:44:31 crc kubenswrapper[4792]: E0216 21:44:31.993018 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f\": container with ID starting with 33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f not found: ID does not exist" containerID="33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f" Feb 16 21:44:31 crc kubenswrapper[4792]: I0216 21:44:31.993060 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f"} err="failed to get container status \"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f\": rpc error: code = NotFound desc = could not find container \"33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f\": container with ID starting with 33056d1cd5889195eb663475b675d76b9e4be8479a9210d1696e60604c62355f not found: ID does not exist" Feb 16 21:44:32 crc kubenswrapper[4792]: I0216 21:44:32.035243 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd983af-64e8-4770-842c-9335c49ae36d" path="/var/lib/kubelet/pods/abd983af-64e8-4770-842c-9335c49ae36d/volumes" Feb 16 21:44:39 crc kubenswrapper[4792]: I0216 21:44:39.857143 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tr7np" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" containerName="console" containerID="cri-o://5be3df284be45201565d60b10dd1695a50b44f354cb8f327798cb7ea7946fdd8" gracePeriod=15 Feb 16 21:44:39 crc kubenswrapper[4792]: I0216 21:44:39.991456 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tr7np_ae243370-753c-48cb-b885-b4bf62dd55ef/console/0.log" Feb 16 21:44:39 crc kubenswrapper[4792]: I0216 21:44:39.991504 4792 generic.go:334] "Generic (PLEG): container finished" podID="ae243370-753c-48cb-b885-b4bf62dd55ef" 
containerID="5be3df284be45201565d60b10dd1695a50b44f354cb8f327798cb7ea7946fdd8" exitCode=2 Feb 16 21:44:39 crc kubenswrapper[4792]: I0216 21:44:39.991532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tr7np" event={"ID":"ae243370-753c-48cb-b885-b4bf62dd55ef","Type":"ContainerDied","Data":"5be3df284be45201565d60b10dd1695a50b44f354cb8f327798cb7ea7946fdd8"} Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.362054 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tr7np_ae243370-753c-48cb-b885-b4bf62dd55ef/console/0.log" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.362394 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.453829 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.453899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.453971 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.454045 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.454093 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.454121 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.454179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss5qk\" (UniqueName: \"kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk\") pod \"ae243370-753c-48cb-b885-b4bf62dd55ef\" (UID: \"ae243370-753c-48cb-b885-b4bf62dd55ef\") " Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.455061 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca" 
(OuterVolumeSpecName: "service-ca") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.455079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.455095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config" (OuterVolumeSpecName: "console-config") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.455982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.460881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk" (OuterVolumeSpecName: "kube-api-access-ss5qk") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "kube-api-access-ss5qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.461119 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.462499 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ae243370-753c-48cb-b885-b4bf62dd55ef" (UID: "ae243370-753c-48cb-b885-b4bf62dd55ef"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555780 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555818 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555831 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555845 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss5qk\" (UniqueName: \"kubernetes.io/projected/ae243370-753c-48cb-b885-b4bf62dd55ef-kube-api-access-ss5qk\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555859 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555870 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae243370-753c-48cb-b885-b4bf62dd55ef-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:40 crc kubenswrapper[4792]: I0216 21:44:40.555888 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae243370-753c-48cb-b885-b4bf62dd55ef-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.002635 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tr7np_ae243370-753c-48cb-b885-b4bf62dd55ef/console/0.log" Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.002757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tr7np" event={"ID":"ae243370-753c-48cb-b885-b4bf62dd55ef","Type":"ContainerDied","Data":"20a6657a3e57b1a45009c81520001761880fd37d6b7fa5d1089235f17867d265"} Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.002786 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tr7np" Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.002812 4792 scope.go:117] "RemoveContainer" containerID="5be3df284be45201565d60b10dd1695a50b44f354cb8f327798cb7ea7946fdd8" Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.046674 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tr7np"] Feb 16 21:44:41 crc kubenswrapper[4792]: I0216 21:44:41.051447 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tr7np"] Feb 16 21:44:42 crc kubenswrapper[4792]: I0216 21:44:42.041972 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" path="/var/lib/kubelet/pods/ae243370-753c-48cb-b885-b4bf62dd55ef/volumes" Feb 16 21:44:44 crc kubenswrapper[4792]: I0216 21:44:44.804956 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz" Feb 16 21:44:44 crc kubenswrapper[4792]: I0216 21:44:44.809410 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6bd8fbb5df-dkthz" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.178704 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw"] Feb 16 21:45:00 crc kubenswrapper[4792]: E0216 21:45:00.179983 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" containerName="console" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.180016 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" containerName="console" Feb 16 21:45:00 crc kubenswrapper[4792]: E0216 21:45:00.180135 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd983af-64e8-4770-842c-9335c49ae36d" containerName="registry" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.180156 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd983af-64e8-4770-842c-9335c49ae36d" containerName="registry" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.180418 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae243370-753c-48cb-b885-b4bf62dd55ef" containerName="console" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.180460 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd983af-64e8-4770-842c-9335c49ae36d" containerName="registry" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.181346 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.184554 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw"] Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.184716 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.185713 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.283490 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.283534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.283569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxczc\" (UniqueName: \"kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.385020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.385300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.385422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxczc\" (UniqueName: \"kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.386775 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume\") pod 
\"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.393394 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.402954 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxczc\" (UniqueName: \"kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc\") pod \"collect-profiles-29521305-69chw\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.501020 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:00 crc kubenswrapper[4792]: I0216 21:45:00.932329 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw"] Feb 16 21:45:01 crc kubenswrapper[4792]: I0216 21:45:01.153063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" event={"ID":"724f6800-0c88-4704-b4fe-a7a3df7b7783","Type":"ContainerStarted","Data":"864c464d1808ca9d4ac750e3ed44001320159ce91f51f5af29620dda2adc4352"} Feb 16 21:45:01 crc kubenswrapper[4792]: I0216 21:45:01.153104 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" event={"ID":"724f6800-0c88-4704-b4fe-a7a3df7b7783","Type":"ContainerStarted","Data":"2310d9c778a5dcf9aa934ae264b9dc019caa75b4e4936dcd4f24409550d41dd1"} Feb 16 21:45:01 crc kubenswrapper[4792]: I0216 21:45:01.171417 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" podStartSLOduration=1.171400913 podStartE2EDuration="1.171400913s" podCreationTimestamp="2026-02-16 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:45:01.167279778 +0000 UTC m=+433.820558679" watchObservedRunningTime="2026-02-16 21:45:01.171400913 +0000 UTC m=+433.824679804" Feb 16 21:45:02 crc kubenswrapper[4792]: I0216 21:45:02.163506 4792 generic.go:334] "Generic (PLEG): container finished" podID="724f6800-0c88-4704-b4fe-a7a3df7b7783" containerID="864c464d1808ca9d4ac750e3ed44001320159ce91f51f5af29620dda2adc4352" exitCode=0 Feb 16 21:45:02 crc kubenswrapper[4792]: I0216 21:45:02.163576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" event={"ID":"724f6800-0c88-4704-b4fe-a7a3df7b7783","Type":"ContainerDied","Data":"864c464d1808ca9d4ac750e3ed44001320159ce91f51f5af29620dda2adc4352"} Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.432267 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.538014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume\") pod \"724f6800-0c88-4704-b4fe-a7a3df7b7783\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.538070 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume\") pod \"724f6800-0c88-4704-b4fe-a7a3df7b7783\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.538189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxczc\" (UniqueName: \"kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc\") pod \"724f6800-0c88-4704-b4fe-a7a3df7b7783\" (UID: \"724f6800-0c88-4704-b4fe-a7a3df7b7783\") " Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.538777 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume" (OuterVolumeSpecName: "config-volume") pod "724f6800-0c88-4704-b4fe-a7a3df7b7783" (UID: "724f6800-0c88-4704-b4fe-a7a3df7b7783"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.544406 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc" (OuterVolumeSpecName: "kube-api-access-kxczc") pod "724f6800-0c88-4704-b4fe-a7a3df7b7783" (UID: "724f6800-0c88-4704-b4fe-a7a3df7b7783"). InnerVolumeSpecName "kube-api-access-kxczc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.544443 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "724f6800-0c88-4704-b4fe-a7a3df7b7783" (UID: "724f6800-0c88-4704-b4fe-a7a3df7b7783"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.640346 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/724f6800-0c88-4704-b4fe-a7a3df7b7783-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.640392 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/724f6800-0c88-4704-b4fe-a7a3df7b7783-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:03 crc kubenswrapper[4792]: I0216 21:45:03.640404 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxczc\" (UniqueName: \"kubernetes.io/projected/724f6800-0c88-4704-b4fe-a7a3df7b7783-kube-api-access-kxczc\") on node \"crc\" DevicePath \"\"" Feb 16 21:45:04 crc kubenswrapper[4792]: I0216 21:45:04.176887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" event={"ID":"724f6800-0c88-4704-b4fe-a7a3df7b7783","Type":"ContainerDied","Data":"2310d9c778a5dcf9aa934ae264b9dc019caa75b4e4936dcd4f24409550d41dd1"} Feb 16 21:45:04 crc kubenswrapper[4792]: I0216 21:45:04.176944 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2310d9c778a5dcf9aa934ae264b9dc019caa75b4e4936dcd4f24409550d41dd1" Feb 16 21:45:04 crc kubenswrapper[4792]: I0216 21:45:04.176988 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw" Feb 16 21:45:06 crc kubenswrapper[4792]: I0216 21:45:06.073716 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:45:06 crc kubenswrapper[4792]: I0216 21:45:06.122002 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:45:06 crc kubenswrapper[4792]: I0216 21:45:06.222794 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.441094 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:45:24 crc kubenswrapper[4792]: E0216 21:45:24.442057 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724f6800-0c88-4704-b4fe-a7a3df7b7783" containerName="collect-profiles" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.442081 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f6800-0c88-4704-b4fe-a7a3df7b7783" containerName="collect-profiles" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.442305 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="724f6800-0c88-4704-b4fe-a7a3df7b7783" containerName="collect-profiles" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.442947 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.466203 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598585 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598713 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.598784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r254\" (UniqueName: \"kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703491 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc 
kubenswrapper[4792]: I0216 21:45:24.703562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703614 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703649 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.703759 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r254\" (UniqueName: \"kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.705438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.706243 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.707212 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.708468 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.714433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.714698 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.732338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r254\" (UniqueName: \"kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254\") pod \"console-5fb8cfd5f8-fjn25\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:24 crc kubenswrapper[4792]: I0216 21:45:24.779044 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:25 crc kubenswrapper[4792]: I0216 21:45:25.007064 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:45:25 crc kubenswrapper[4792]: I0216 21:45:25.330751 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fb8cfd5f8-fjn25" event={"ID":"070b7637-8d35-4fd2-82a5-91b32097015b","Type":"ContainerStarted","Data":"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06"} Feb 16 21:45:25 crc kubenswrapper[4792]: I0216 21:45:25.331175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fb8cfd5f8-fjn25" event={"ID":"070b7637-8d35-4fd2-82a5-91b32097015b","Type":"ContainerStarted","Data":"071b6d3c8c9987844894df26a1b6b0cd87f20615bd20c1e791f6480979f1f562"} Feb 16 21:45:25 crc kubenswrapper[4792]: I0216 21:45:25.366643 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5fb8cfd5f8-fjn25" podStartSLOduration=1.366591962 podStartE2EDuration="1.366591962s" podCreationTimestamp="2026-02-16 21:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:45:25.361885922 +0000 UTC m=+458.015164883" watchObservedRunningTime="2026-02-16 21:45:25.366591962 +0000 UTC m=+458.019870893" Feb 16 21:45:34 crc kubenswrapper[4792]: I0216 21:45:34.779914 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:34 crc kubenswrapper[4792]: I0216 21:45:34.789796 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:34 crc kubenswrapper[4792]: I0216 21:45:34.796786 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:35 crc kubenswrapper[4792]: I0216 21:45:35.417228 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:45:35 crc kubenswrapper[4792]: I0216 21:45:35.553163 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"] Feb 16 21:46:00 crc kubenswrapper[4792]: I0216 21:46:00.595552 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7575d9dcf4-vv2fk" podUID="152960a0-1edd-4b0a-912b-c577cf58942c" containerName="console" containerID="cri-o://377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba" gracePeriod=15 Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.053507 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7575d9dcf4-vv2fk_152960a0-1edd-4b0a-912b-c577cf58942c/console/0.log" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.053852 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7575d9dcf4-vv2fk" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.115827 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp8ff\" (UniqueName: \"kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.115892 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.115931 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.115955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.115998 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.116023 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.116104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config\") pod \"152960a0-1edd-4b0a-912b-c577cf58942c\" (UID: \"152960a0-1edd-4b0a-912b-c577cf58942c\") " Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.118122 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.118401 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca" (OuterVolumeSpecName: "service-ca") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.118584 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.119200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config" (OuterVolumeSpecName: "console-config") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.124022 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.132850 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff" (OuterVolumeSpecName: "kube-api-access-vp8ff") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "kube-api-access-vp8ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.135809 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "152960a0-1edd-4b0a-912b-c577cf58942c" (UID: "152960a0-1edd-4b0a-912b-c577cf58942c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217699 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217750 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217768 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/152960a0-1edd-4b0a-912b-c577cf58942c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217785 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217803 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp8ff\" (UniqueName: \"kubernetes.io/projected/152960a0-1edd-4b0a-912b-c577cf58942c-kube-api-access-vp8ff\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217823 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.217840 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/152960a0-1edd-4b0a-912b-c577cf58942c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596447 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7575d9dcf4-vv2fk_152960a0-1edd-4b0a-912b-c577cf58942c/console/0.log" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596539 4792 generic.go:334] "Generic (PLEG): container finished" podID="152960a0-1edd-4b0a-912b-c577cf58942c" containerID="377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba" exitCode=2 Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596587 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7575d9dcf4-vv2fk" event={"ID":"152960a0-1edd-4b0a-912b-c577cf58942c","Type":"ContainerDied","Data":"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba"} Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7575d9dcf4-vv2fk" event={"ID":"152960a0-1edd-4b0a-912b-c577cf58942c","Type":"ContainerDied","Data":"4ab033af3fe0660e6819d1bf90b347052127fb85f6d29aa51f1060699abf3bcd"} Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596688 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7575d9dcf4-vv2fk" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.596717 4792 scope.go:117] "RemoveContainer" containerID="377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.633905 4792 scope.go:117] "RemoveContainer" containerID="377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba" Feb 16 21:46:01 crc kubenswrapper[4792]: E0216 21:46:01.634412 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba\": container with ID starting with 377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba not found: ID does not exist" containerID="377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.634487 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba"} err="failed to get container status \"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba\": rpc error: code = NotFound desc = could not find container \"377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba\": container with ID starting with 377b2b61178c62c4500ff44b0d7dcddedad2b3e794c0b9062e03fc2e385c5cba not found: ID does not exist" Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.651721 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"] Feb 16 21:46:01 crc kubenswrapper[4792]: I0216 21:46:01.660197 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7575d9dcf4-vv2fk"] Feb 16 21:46:02 crc kubenswrapper[4792]: I0216 21:46:02.042519 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152960a0-1edd-4b0a-912b-c577cf58942c" path="/var/lib/kubelet/pods/152960a0-1edd-4b0a-912b-c577cf58942c/volumes" Feb 16 21:46:31 crc kubenswrapper[4792]: I0216 21:46:31.532826 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:46:31 crc kubenswrapper[4792]: I0216 21:46:31.533369 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:47:01 crc kubenswrapper[4792]: I0216 21:47:01.532389 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:47:01 crc kubenswrapper[4792]: I0216 21:47:01.532905 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.396327 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq"] Feb 16 21:47:08 crc kubenswrapper[4792]: E0216 21:47:08.397457 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152960a0-1edd-4b0a-912b-c577cf58942c" containerName="console" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.397478 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="152960a0-1edd-4b0a-912b-c577cf58942c" containerName="console" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.397670 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="152960a0-1edd-4b0a-912b-c577cf58942c" containerName="console" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.398730 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.400740 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.412358 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq"] Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.595583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.596110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgth\" (UniqueName: \"kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.596169 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.697360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgth\" (UniqueName: \"kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.697469 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.697547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.698777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.698872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:08 crc kubenswrapper[4792]: I0216 21:47:08.733548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgth\" (UniqueName: \"kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:09 crc kubenswrapper[4792]: I0216 21:47:09.029062 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:09 crc kubenswrapper[4792]: I0216 21:47:09.480142 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq"] Feb 16 21:47:09 crc kubenswrapper[4792]: W0216 21:47:09.486495 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfb5cd53_4f38_4b74_98ba_d9e0107fef18.slice/crio-f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd WatchSource:0}: Error finding container f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd: Status 404 returned error can't find the container with id f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd Feb 16 21:47:10 crc kubenswrapper[4792]: I0216 21:47:10.087514 4792 generic.go:334] "Generic (PLEG): container finished" podID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerID="b33ef2e9c9174fd1b32ef5a52a9471023e82d2a969a181d8c80b87de9a77f81d" exitCode=0 Feb 16 21:47:10 crc kubenswrapper[4792]: I0216 21:47:10.087633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" event={"ID":"cfb5cd53-4f38-4b74-98ba-d9e0107fef18","Type":"ContainerDied","Data":"b33ef2e9c9174fd1b32ef5a52a9471023e82d2a969a181d8c80b87de9a77f81d"} Feb 16 21:47:10 crc kubenswrapper[4792]: I0216 21:47:10.087818 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" event={"ID":"cfb5cd53-4f38-4b74-98ba-d9e0107fef18","Type":"ContainerStarted","Data":"f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd"} Feb 16 21:47:10 crc kubenswrapper[4792]: I0216 21:47:10.089102 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:47:12 crc kubenswrapper[4792]: I0216 21:47:12.105846 4792 generic.go:334] "Generic (PLEG): container finished" podID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerID="3bb5523d7f495a4f95053ba0fc755e8336a41e13603136a734b1c33ae9947a5c" exitCode=0 Feb 16 21:47:12 crc kubenswrapper[4792]: I0216 21:47:12.106004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" event={"ID":"cfb5cd53-4f38-4b74-98ba-d9e0107fef18","Type":"ContainerDied","Data":"3bb5523d7f495a4f95053ba0fc755e8336a41e13603136a734b1c33ae9947a5c"} Feb 16 21:47:13 crc kubenswrapper[4792]: I0216 21:47:13.120373 4792 generic.go:334] "Generic (PLEG): container finished" podID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerID="656dcdafade4f4065a06e8937ac41353fb931370429b5a559dc35e8071e1af6a" exitCode=0 Feb 16 21:47:13 crc kubenswrapper[4792]: I0216 21:47:13.120547 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" event={"ID":"cfb5cd53-4f38-4b74-98ba-d9e0107fef18","Type":"ContainerDied","Data":"656dcdafade4f4065a06e8937ac41353fb931370429b5a559dc35e8071e1af6a"} Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.345134 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.386101 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgth\" (UniqueName: \"kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth\") pod \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.386353 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle\") pod \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.386425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util\") pod \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\" (UID: \"cfb5cd53-4f38-4b74-98ba-d9e0107fef18\") " Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.392902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth" (OuterVolumeSpecName: "kube-api-access-8fgth") pod "cfb5cd53-4f38-4b74-98ba-d9e0107fef18" (UID: "cfb5cd53-4f38-4b74-98ba-d9e0107fef18"). InnerVolumeSpecName "kube-api-access-8fgth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.398924 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle" (OuterVolumeSpecName: "bundle") pod "cfb5cd53-4f38-4b74-98ba-d9e0107fef18" (UID: "cfb5cd53-4f38-4b74-98ba-d9e0107fef18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.412424 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util" (OuterVolumeSpecName: "util") pod "cfb5cd53-4f38-4b74-98ba-d9e0107fef18" (UID: "cfb5cd53-4f38-4b74-98ba-d9e0107fef18"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.487746 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgth\" (UniqueName: \"kubernetes.io/projected/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-kube-api-access-8fgth\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.487785 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:14 crc kubenswrapper[4792]: I0216 21:47:14.487794 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfb5cd53-4f38-4b74-98ba-d9e0107fef18-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:15 crc kubenswrapper[4792]: I0216 21:47:15.137454 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" event={"ID":"cfb5cd53-4f38-4b74-98ba-d9e0107fef18","Type":"ContainerDied","Data":"f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd"} Feb 16 21:47:15 crc kubenswrapper[4792]: I0216 21:47:15.137860 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f128f4e34ebbd9f0e8c4937f6754c2554126b1a8a65cb92ed885fad3640bb1bd" Feb 16 21:47:15 crc kubenswrapper[4792]: I0216 21:47:15.137523 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq" Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.755693 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rfdc5"] Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.756712 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-controller" containerID="cri-o://7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.756758 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="nbdb" containerID="cri-o://5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.756826 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.756896 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-acl-logging" containerID="cri-o://3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.756876 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" 
containerName="kube-rbac-proxy-node" containerID="cri-o://4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.757032 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="northd" containerID="cri-o://c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.757128 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="sbdb" containerID="cri-o://279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" gracePeriod=30 Feb 16 21:47:19 crc kubenswrapper[4792]: I0216 21:47:19.807662 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" containerID="cri-o://4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" gracePeriod=30 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.169290 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/2.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.169752 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/1.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.169794 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" containerID="664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436" exitCode=2 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.169828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerDied","Data":"664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.169886 4792 scope.go:117] "RemoveContainer" containerID="363b21e1b825a17933c30acdeb622e40cfa974bddd490fbc8d6d676d12a17838" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.170402 4792 scope.go:117] "RemoveContainer" containerID="664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436" Feb 16 21:47:20 crc kubenswrapper[4792]: E0216 21:47:20.170833 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mp8ql_openshift-multus(3f2095e9-5a78-45fb-a930-eacbd54ec73d)\"" pod="openshift-multus/multus-mp8ql" podUID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.171939 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovnkube-controller/3.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.174382 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-acl-logging/0.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.174900 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-controller/0.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175499 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" exitCode=0 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175526 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" exitCode=0 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175537 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" exitCode=0 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175548 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" exitCode=0 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175557 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" exitCode=143 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175566 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" exitCode=143 Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175568 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175647 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175660 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.175691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" 
event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736"} Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.188807 4792 scope.go:117] "RemoveContainer" containerID="3276e38948b603f587c09c3f3f6a4078f5e7bf192b20cba2dc4da7db72500f5c" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.933066 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-acl-logging/0.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.934765 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-controller/0.log" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.935282 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986022 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986112 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986159 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986249 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986301 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vfrl\" (UniqueName: \"kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986352 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986407 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986433 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986450 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986481 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986496 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986531 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986570 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986659 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986698 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986726 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986753 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986794 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"616c8c01-b6e2-4851-9729-888790cbbe63\" (UID: \"616c8c01-b6e2-4851-9729-888790cbbe63\") " Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.986983 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987243 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash" (OuterVolumeSpecName: "host-slash") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987267 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987368 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987409 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987489 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987467 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket" (OuterVolumeSpecName: "log-socket") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987429 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987804 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log" (OuterVolumeSpecName: "node-log") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.987832 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988405 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988507 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988856 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988900 4792 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988914 4792 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988928 4792 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988942 4792 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988952 4792 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988963 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988974 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988984 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.988994 4792 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989004 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989015 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989026 4792 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc 
kubenswrapper[4792]: I0216 21:47:20.989037 4792 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989047 4792 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989060 4792 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:20 crc kubenswrapper[4792]: I0216 21:47:20.989075 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/616c8c01-b6e2-4851-9729-888790cbbe63-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.000519 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.005825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl" (OuterVolumeSpecName: "kube-api-access-5vfrl") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "kube-api-access-5vfrl". 
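
Every TearDown above is acknowledged by a reconciler_common.go:293 "Volume detached" record for the same UniqueName; the host-path and configmap volumes detach within the same millisecond, while the ovn-node-metrics-cert secret and the kube-api-access-5vfrl projected token finish tearing down slightly later (21:47:21.000519 and 21:47:21.005825). A quick cross-check that nothing was left attached, assuming journal text on stdin and allowing for several records fused onto one physical line as in this capture (the program name, buffer size, and invocation are arbitrary choices, not anything this log prescribes):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Both record kinds are keyed by the volume's UniqueName,
	// e.g. kubernetes.io/host-path/<pod-uid>-run-ovn.
	reTear := regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)
	// "Volume detached" records are structured-logged, so their quotes appear as \".
	reDet := regexp.MustCompile(`UniqueName: \\"([^"\\]+)\\"\) on node`)

	torn, detached := map[string]bool{}, map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // physical lines in this capture are very long
	for sc.Scan() {
		// Several records are often fused onto one line, so take all matches.
		for _, m := range reTear.FindAllStringSubmatch(sc.Text(), -1) {
			torn[m[1]] = true
		}
		for _, m := range reDet.FindAllStringSubmatch(sc.Text(), -1) {
			detached[m[1]] = true
		}
	}
	for v := range torn {
		if !detached[v] {
			fmt.Println("TearDown without matching detach:", v)
		}
	}
}

Fed something like `journalctl -u kubelet`, an empty report would confirm the old ovnkube-node-rfdc5 pod released every volume before the replacement pod's volumes were set up.
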
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.006752 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mhlc8"] Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.007338 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="extract" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.007422 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="extract" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.007486 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="util" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.007543 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="util" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.007686 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.007756 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.007845 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.007903 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.007955 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008001 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.008054 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="sbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008101 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="sbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.008175 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kubecfg-setup" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008242 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kubecfg-setup" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.008312 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-node" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008382 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-node" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.008439 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" 
containerName="northd" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008494 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="northd" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.008552 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="nbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.008619 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="nbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.009024 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="pull" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009202 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="pull" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.009273 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009330 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.009388 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-acl-logging" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009436 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-acl-logging" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.009516 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009570 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.009653 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009703 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009901 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.009997 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010057 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-node" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010128 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010189 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="northd" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010253 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb5cd53-4f38-4b74-98ba-d9e0107fef18" containerName="extract" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010689 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-acl-logging" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010744 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovn-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.010796 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="sbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.011015 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="nbdb" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.011089 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.011268 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.011324 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.011480 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.011539 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" containerName="ovnkube-controller" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.014477 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.022524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "616c8c01-b6e2-4851-9729-888790cbbe63" (UID: "616c8c01-b6e2-4851-9729-888790cbbe63"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-env-overrides\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091470 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-netns\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-kubelet\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091579 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-etc-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-node-log\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091641 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvkqh\" (UniqueName: \"kubernetes.io/projected/b458d59d-b2ab-435c-adbe-9afff834455d-kube-api-access-mvkqh\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091659 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-netd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-script-lib\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091743 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b458d59d-b2ab-435c-adbe-9afff834455d-ovn-node-metrics-cert\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-bin\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091779 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-var-lib-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091797 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-systemd-units\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-config\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091839 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-systemd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091858 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-log-socket\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091875 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-slash\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091892 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-ovn\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091935 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/616c8c01-b6e2-4851-9729-888790cbbe63-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091945 4792 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/616c8c01-b6e2-4851-9729-888790cbbe63-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.091954 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vfrl\" (UniqueName: \"kubernetes.io/projected/616c8c01-b6e2-4851-9729-888790cbbe63-kube-api-access-5vfrl\") on node \"crc\" DevicePath \"\"" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.185170 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-acl-logging/0.log" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.185722 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rfdc5_616c8c01-b6e2-4851-9729-888790cbbe63/ovn-controller/0.log" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.186447 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" exitCode=0 Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.186493 4792 generic.go:334] "Generic (PLEG): container finished" podID="616c8c01-b6e2-4851-9729-888790cbbe63" containerID="4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" exitCode=0 Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.186560 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.186656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f"} Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.187121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37"} Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.187155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rfdc5" event={"ID":"616c8c01-b6e2-4851-9729-888790cbbe63","Type":"ContainerDied","Data":"a35635598cbc2064aefc74b1ab85e0ab16ce48e4291a955ab13a2d8b62812e9d"} Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.187182 4792 scope.go:117] "RemoveContainer" containerID="4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.189445 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/2.log" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.193834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.193880 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-env-overrides\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.193915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.193950 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-netns\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.193985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-kubelet\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-etc-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194067 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-node-log\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvkqh\" (UniqueName: \"kubernetes.io/projected/b458d59d-b2ab-435c-adbe-9afff834455d-kube-api-access-mvkqh\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194145 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-netd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-script-lib\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194206 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b458d59d-b2ab-435c-adbe-9afff834455d-ovn-node-metrics-cert\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-bin\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194260 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-var-lib-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-systemd-units\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-config\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-systemd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-log-socket\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-slash\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194410 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-ovn\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-ovn\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194714 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-netd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194763 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-systemd-units\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-etc-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194804 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-cni-bin\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-node-log\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-var-lib-openvswitch\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194831 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-kubelet\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194878 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.194896 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-run-netns\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.195271 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-run-systemd\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.195386 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-env-overrides\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.195418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-log-socket\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.195442 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b458d59d-b2ab-435c-adbe-9afff834455d-host-slash\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.196031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-config\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.199158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b458d59d-b2ab-435c-adbe-9afff834455d-ovnkube-script-lib\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.203199 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b458d59d-b2ab-435c-adbe-9afff834455d-ovn-node-metrics-cert\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.203592 4792 scope.go:117] "RemoveContainer" containerID="279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.210158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvkqh\" (UniqueName: \"kubernetes.io/projected/b458d59d-b2ab-435c-adbe-9afff834455d-kube-api-access-mvkqh\") pod \"ovnkube-node-mhlc8\" (UID: \"b458d59d-b2ab-435c-adbe-9afff834455d\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.252174 4792 scope.go:117] "RemoveContainer" containerID="5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.262070 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rfdc5"] Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.268093 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rfdc5"] Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.276249 4792 scope.go:117] "RemoveContainer" containerID="c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.297237 4792 scope.go:117] 
"RemoveContainer" containerID="ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.314846 4792 scope.go:117] "RemoveContainer" containerID="4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.327780 4792 scope.go:117] "RemoveContainer" containerID="3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.344252 4792 scope.go:117] "RemoveContainer" containerID="7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.357383 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.357542 4792 scope.go:117] "RemoveContainer" containerID="7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.381057 4792 scope.go:117] "RemoveContainer" containerID="4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.381580 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382\": container with ID starting with 4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382 not found: ID does not exist" containerID="4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.381640 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382"} err="failed to get container status \"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382\": rpc error: code = NotFound desc = could not find container \"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382\": container with ID starting with 4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.381664 4792 scope.go:117] "RemoveContainer" containerID="279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.381963 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\": container with ID starting with 279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580 not found: ID does not exist" containerID="279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.381987 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580"} err="failed to get container status \"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\": rpc error: code = NotFound desc = could not find container \"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\": container with ID starting with 279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.382002 4792 
scope.go:117] "RemoveContainer" containerID="5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.382536 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\": container with ID starting with 5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab not found: ID does not exist" containerID="5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.382582 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab"} err="failed to get container status \"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\": rpc error: code = NotFound desc = could not find container \"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\": container with ID starting with 5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.382626 4792 scope.go:117] "RemoveContainer" containerID="c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.383057 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\": container with ID starting with c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992 not found: ID does not exist" containerID="c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383089 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992"} err="failed to get container status \"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\": rpc error: code = NotFound desc = could not find container \"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\": container with ID starting with c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383113 4792 scope.go:117] "RemoveContainer" containerID="ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.383433 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\": container with ID starting with ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f not found: ID does not exist" containerID="ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383463 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f"} err="failed to get container status \"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\": rpc error: code = NotFound desc = could not find container \"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\": container with ID starting with 
ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383483 4792 scope.go:117] "RemoveContainer" containerID="4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.383778 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\": container with ID starting with 4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37 not found: ID does not exist" containerID="4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383808 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37"} err="failed to get container status \"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\": rpc error: code = NotFound desc = could not find container \"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\": container with ID starting with 4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.383828 4792 scope.go:117] "RemoveContainer" containerID="3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.384150 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\": container with ID starting with 3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea not found: ID does not exist" containerID="3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.384183 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea"} err="failed to get container status \"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\": rpc error: code = NotFound desc = could not find container \"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\": container with ID starting with 3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.384206 4792 scope.go:117] "RemoveContainer" containerID="7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.384426 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\": container with ID starting with 7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736 not found: ID does not exist" containerID="7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.384453 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736"} err="failed to get container status \"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\": rpc 
error: code = NotFound desc = could not find container \"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\": container with ID starting with 7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.384470 4792 scope.go:117] "RemoveContainer" containerID="7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0" Feb 16 21:47:21 crc kubenswrapper[4792]: E0216 21:47:21.385427 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\": container with ID starting with 7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0 not found: ID does not exist" containerID="7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.385454 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0"} err="failed to get container status \"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\": rpc error: code = NotFound desc = could not find container \"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\": container with ID starting with 7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.385472 4792 scope.go:117] "RemoveContainer" containerID="4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.385800 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382"} err="failed to get container status \"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382\": rpc error: code = NotFound desc = could not find container \"4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382\": container with ID starting with 4dcf56602894013586eecab569366146cf6489894520186361952dd25205e382 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.385825 4792 scope.go:117] "RemoveContainer" containerID="279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.386392 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580"} err="failed to get container status \"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\": rpc error: code = NotFound desc = could not find container \"279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580\": container with ID starting with 279169c2486f58c9699741e0f93433f714b65364f3563164ed47a2d411cff580 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.386417 4792 scope.go:117] "RemoveContainer" containerID="5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.386869 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab"} err="failed to get container status \"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\": rpc 
error: code = NotFound desc = could not find container \"5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab\": container with ID starting with 5751cc1c9386a140e9ccd08d68d33917e722a47bce855b6468158fd757c579ab not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.386900 4792 scope.go:117] "RemoveContainer" containerID="c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387138 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992"} err="failed to get container status \"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\": rpc error: code = NotFound desc = could not find container \"c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992\": container with ID starting with c97bb0eb8b54cc31298803022c012716b0147703cd0221e10469280c7bbcf992 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387164 4792 scope.go:117] "RemoveContainer" containerID="ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387366 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f"} err="failed to get container status \"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\": rpc error: code = NotFound desc = could not find container \"ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f\": container with ID starting with ee0d5211fdf1b69bdab88738d8d1b172dda14ecf0d47f72c43f46f8dc7ff8d0f not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387392 4792 scope.go:117] "RemoveContainer" containerID="4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387589 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37"} err="failed to get container status \"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\": rpc error: code = NotFound desc = could not find container \"4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37\": container with ID starting with 4cbda3db7a5be7ca45d1b8cab7a4e18264bdd0a69237ea33624378ebb5542d37 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387632 4792 scope.go:117] "RemoveContainer" containerID="3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387809 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea"} err="failed to get container status \"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\": rpc error: code = NotFound desc = could not find container \"3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea\": container with ID starting with 3834795f6ad31d16f0946a4551245b22de438bd8e41c4f63df2dc874e2c557ea not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.387827 4792 scope.go:117] "RemoveContainer" containerID="7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736" Feb 16 21:47:21 crc 
kubenswrapper[4792]: I0216 21:47:21.387991 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736"} err="failed to get container status \"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\": rpc error: code = NotFound desc = could not find container \"7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736\": container with ID starting with 7d2b4fb794bffb47585c977becd39562c03d0ff46e5747e13ec11344ff5e0736 not found: ID does not exist" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.388007 4792 scope.go:117] "RemoveContainer" containerID="7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0" Feb 16 21:47:21 crc kubenswrapper[4792]: I0216 21:47:21.388163 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0"} err="failed to get container status \"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\": rpc error: code = NotFound desc = could not find container \"7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0\": container with ID starting with 7370047b49bff7d38b1995195800df525e197e520b85f31db8512859e18cc5d0 not found: ID does not exist" Feb 16 21:47:22 crc kubenswrapper[4792]: I0216 21:47:22.034256 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616c8c01-b6e2-4851-9729-888790cbbe63" path="/var/lib/kubelet/pods/616c8c01-b6e2-4851-9729-888790cbbe63/volumes" Feb 16 21:47:22 crc kubenswrapper[4792]: I0216 21:47:22.194512 4792 generic.go:334] "Generic (PLEG): container finished" podID="b458d59d-b2ab-435c-adbe-9afff834455d" containerID="b12697e68bffaaf96f83abbfeffad608ed87b0bbe5a7c8bb634d95f2ba2390c0" exitCode=0 Feb 16 21:47:22 crc kubenswrapper[4792]: I0216 21:47:22.194565 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerDied","Data":"b12697e68bffaaf96f83abbfeffad608ed87b0bbe5a7c8bb634d95f2ba2390c0"} Feb 16 21:47:22 crc kubenswrapper[4792]: I0216 21:47:22.194589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"a8159359ea88518f69150deb096b5cf4385bae61d7de7672d7e04be721541d89"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.205022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"4a903ba55263621b7eda84026eab2028e68bf4ae09784f991c438888e5770b14"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.206160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"48398cf7ee34fd88c681308cb51062c2e8610ddae03acef55ef494be134e28cf"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.206250 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"876143eac565a6e775f1ceb0561e7d62990f62821e464c4185e1edbe4814a37c"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.206270 4792 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"a23856ab11d4b97e3dcec8d671db5702c70112792f1be5f04682a4654809b2b7"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.206285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"22049f56b6e988e93792cae64e81a37f9f5782bae856347e0fcb245cbf22736f"} Feb 16 21:47:23 crc kubenswrapper[4792]: I0216 21:47:23.206297 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"62301bfa7cae051b95deea53e02c9c6192efeaa5e40492106ea6c50ca565034d"} Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.133565 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-785cg"] Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.134989 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.138035 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.138090 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-5cj8t" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.138570 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.167960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwx4z\" (UniqueName: \"kubernetes.io/projected/cc1404e2-49f6-48df-99fc-24b7b05b5e33-kube-api-access-mwx4z\") pod \"obo-prometheus-operator-68bc856cb9-785cg\" (UID: \"cc1404e2-49f6-48df-99fc-24b7b05b5e33\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.224001 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"029227598db8267b2fcd53db523cac2903cda1ab9a4d2f591f2ca92409a00b80"} Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.257265 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v"] Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.258211 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.260488 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-k2v5c" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.260497 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.268904 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwx4z\" (UniqueName: \"kubernetes.io/projected/cc1404e2-49f6-48df-99fc-24b7b05b5e33-kube-api-access-mwx4z\") pod \"obo-prometheus-operator-68bc856cb9-785cg\" (UID: \"cc1404e2-49f6-48df-99fc-24b7b05b5e33\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.278333 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg"] Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.279208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.295092 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwx4z\" (UniqueName: \"kubernetes.io/projected/cc1404e2-49f6-48df-99fc-24b7b05b5e33-kube-api-access-mwx4z\") pod \"obo-prometheus-operator-68bc856cb9-785cg\" (UID: \"cc1404e2-49f6-48df-99fc-24b7b05b5e33\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.361901 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7sqrb"] Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.362855 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.365905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.365963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-49hhg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.370791 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.370828 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.370870 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.370915 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.450586 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.471722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.472133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszhg\" (UniqueName: \"kubernetes.io/projected/85d29954-608f-4bb5-805e-5ac6d45b6652-kube-api-access-pszhg\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.472191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.472235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85d29954-608f-4bb5-805e-5ac6d45b6652-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.472272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.472299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.478116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.478116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.480004 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e173d96c-280b-4293-ae21-272cce1b11bc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg\" (UID: \"e173d96c-280b-4293-ae21-272cce1b11bc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.482965 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(16ed4156e35715086a698d07da60cb5b4b30b87db0a2829a6ca1e545a4e88a13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.483015 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(16ed4156e35715086a698d07da60cb5b4b30b87db0a2829a6ca1e545a4e88a13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.483036 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(16ed4156e35715086a698d07da60cb5b4b30b87db0a2829a6ca1e545a4e88a13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.483081 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(16ed4156e35715086a698d07da60cb5b4b30b87db0a2829a6ca1e545a4e88a13): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" podUID="cc1404e2-49f6-48df-99fc-24b7b05b5e33" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.485460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2899a7e8-f5fa-4879-9df7-ba57ae9f4262-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v\" (UID: \"2899a7e8-f5fa-4879-9df7-ba57ae9f4262\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.558638 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7jr7l"] Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.559315 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.561915 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-v2vrr" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.573304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85d29954-608f-4bb5-805e-5ac6d45b6652-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.573395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pszhg\" (UniqueName: \"kubernetes.io/projected/85d29954-608f-4bb5-805e-5ac6d45b6652-kube-api-access-pszhg\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.573557 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.577139 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85d29954-608f-4bb5-805e-5ac6d45b6652-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.595491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pszhg\" (UniqueName: \"kubernetes.io/projected/85d29954-608f-4bb5-805e-5ac6d45b6652-kube-api-access-pszhg\") pod \"observability-operator-59bdc8b94-7sqrb\" (UID: \"85d29954-608f-4bb5-805e-5ac6d45b6652\") " pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.611086 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(06a69a792054206a5a6970962ee4834bee34fa6428f1bd674dc17ff454a8989d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.611166 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(06a69a792054206a5a6970962ee4834bee34fa6428f1bd674dc17ff454a8989d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.611196 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(06a69a792054206a5a6970962ee4834bee34fa6428f1bd674dc17ff454a8989d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.611248 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(06a69a792054206a5a6970962ee4834bee34fa6428f1bd674dc17ff454a8989d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" podUID="2899a7e8-f5fa-4879-9df7-ba57ae9f4262" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.615096 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.642294 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(855824d7d5723bf7083ed5841d76a729ab51d2b646c5a890382f0e2ee47572ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.642364 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(855824d7d5723bf7083ed5841d76a729ab51d2b646c5a890382f0e2ee47572ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.642390 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(855824d7d5723bf7083ed5841d76a729ab51d2b646c5a890382f0e2ee47572ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.642440 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(855824d7d5723bf7083ed5841d76a729ab51d2b646c5a890382f0e2ee47572ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" podUID="e173d96c-280b-4293-ae21-272cce1b11bc" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.674715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxf5\" (UniqueName: \"kubernetes.io/projected/f912b10c-80d1-4667-b807-45a54e626fbe-kube-api-access-vxxf5\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.674774 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f912b10c-80d1-4667-b807-45a54e626fbe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.676690 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.697192 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(665dac2089bcfa63598de6e799bb44293b98f6788080aefbb2baa6d55d7ca2b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.697260 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(665dac2089bcfa63598de6e799bb44293b98f6788080aefbb2baa6d55d7ca2b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.697283 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(665dac2089bcfa63598de6e799bb44293b98f6788080aefbb2baa6d55d7ca2b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.697342 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(665dac2089bcfa63598de6e799bb44293b98f6788080aefbb2baa6d55d7ca2b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" podUID="85d29954-608f-4bb5-805e-5ac6d45b6652" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.776762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxf5\" (UniqueName: \"kubernetes.io/projected/f912b10c-80d1-4667-b807-45a54e626fbe-kube-api-access-vxxf5\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.776834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f912b10c-80d1-4667-b807-45a54e626fbe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.777757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f912b10c-80d1-4667-b807-45a54e626fbe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.793970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxf5\" (UniqueName: \"kubernetes.io/projected/f912b10c-80d1-4667-b807-45a54e626fbe-kube-api-access-vxxf5\") pod \"perses-operator-5bf474d74f-7jr7l\" (UID: \"f912b10c-80d1-4667-b807-45a54e626fbe\") " pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: I0216 21:47:26.877511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.896435 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(ab592f84d7d101a84f72896a5f330a2beb189bd645aea1722c9393438bb54733): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.896509 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(ab592f84d7d101a84f72896a5f330a2beb189bd645aea1722c9393438bb54733): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.896534 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(ab592f84d7d101a84f72896a5f330a2beb189bd645aea1722c9393438bb54733): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:26 crc kubenswrapper[4792]: E0216 21:47:26.896583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(ab592f84d7d101a84f72896a5f330a2beb189bd645aea1722c9393438bb54733): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" podUID="f912b10c-80d1-4667-b807-45a54e626fbe" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.237228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" event={"ID":"b458d59d-b2ab-435c-adbe-9afff834455d","Type":"ContainerStarted","Data":"cadc2b39f74105deee970cb32811aa142c1a390c4ae2c205d389a26eb7095f04"} Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.237733 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.237750 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.237760 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.265865 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" podStartSLOduration=8.265841795 podStartE2EDuration="8.265841795s" podCreationTimestamp="2026-02-16 21:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:47:28.26385109 +0000 UTC m=+580.917130011" watchObservedRunningTime="2026-02-16 21:47:28.265841795 +0000 UTC m=+580.919120706" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.324918 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:28 crc kubenswrapper[4792]: I0216 21:47:28.352866 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.106355 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7jr7l"] Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.106491 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.106972 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.133816 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(a3f6a8bacfb90812a87ac69cea329f8c7c84ba7daadc66673e2c26cdc9455ecc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.134242 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(a3f6a8bacfb90812a87ac69cea329f8c7c84ba7daadc66673e2c26cdc9455ecc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.134266 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(a3f6a8bacfb90812a87ac69cea329f8c7c84ba7daadc66673e2c26cdc9455ecc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.134310 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(a3f6a8bacfb90812a87ac69cea329f8c7c84ba7daadc66673e2c26cdc9455ecc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" podUID="f912b10c-80d1-4667-b807-45a54e626fbe" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.138226 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg"] Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.138342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.138817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.146863 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7sqrb"] Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.146985 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.147420 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.150222 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v"] Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.150342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.160494 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.167210 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-785cg"] Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.167349 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:29 crc kubenswrapper[4792]: I0216 21:47:29.167807 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.195105 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(1a1d25fc6608ea18389a64653d9c875df00affa01cf1d7f595c2f183e50d8d04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.195181 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(1a1d25fc6608ea18389a64653d9c875df00affa01cf1d7f595c2f183e50d8d04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.195205 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(1a1d25fc6608ea18389a64653d9c875df00affa01cf1d7f595c2f183e50d8d04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.195251 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(1a1d25fc6608ea18389a64653d9c875df00affa01cf1d7f595c2f183e50d8d04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" podUID="e173d96c-280b-4293-ae21-272cce1b11bc" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.206348 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(e977c1c3d0ca86e095f8b30770e8bb83f85f383f9295e7cebfede67b064f4366): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.206418 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(e977c1c3d0ca86e095f8b30770e8bb83f85f383f9295e7cebfede67b064f4366): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.206442 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(e977c1c3d0ca86e095f8b30770e8bb83f85f383f9295e7cebfede67b064f4366): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.206488 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(e977c1c3d0ca86e095f8b30770e8bb83f85f383f9295e7cebfede67b064f4366): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" podUID="85d29954-608f-4bb5-805e-5ac6d45b6652" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.212550 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(b158a38d104e6eaabd351296bde29cd6d00be207c61578761c229c795f41a8a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.212627 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(b158a38d104e6eaabd351296bde29cd6d00be207c61578761c229c795f41a8a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.212651 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(b158a38d104e6eaabd351296bde29cd6d00be207c61578761c229c795f41a8a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.212691 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(b158a38d104e6eaabd351296bde29cd6d00be207c61578761c229c795f41a8a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" podUID="2899a7e8-f5fa-4879-9df7-ba57ae9f4262" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.230426 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(6fa814108f9fcacb4b6e0a114cba8fd12fe92cde35eb2989c5ed373727e5d8f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.230488 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(6fa814108f9fcacb4b6e0a114cba8fd12fe92cde35eb2989c5ed373727e5d8f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.230513 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(6fa814108f9fcacb4b6e0a114cba8fd12fe92cde35eb2989c5ed373727e5d8f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:29 crc kubenswrapper[4792]: E0216 21:47:29.230619 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(6fa814108f9fcacb4b6e0a114cba8fd12fe92cde35eb2989c5ed373727e5d8f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" podUID="cc1404e2-49f6-48df-99fc-24b7b05b5e33" Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.025716 4792 scope.go:117] "RemoveContainer" containerID="664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436" Feb 16 21:47:31 crc kubenswrapper[4792]: E0216 21:47:31.027637 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mp8ql_openshift-multus(3f2095e9-5a78-45fb-a930-eacbd54ec73d)\"" pod="openshift-multus/multus-mp8ql" podUID="3f2095e9-5a78-45fb-a930-eacbd54ec73d" Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.532350 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.532413 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.532461 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.533214 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:47:31 crc kubenswrapper[4792]: I0216 21:47:31.533273 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b" gracePeriod=600 Feb 16 21:47:32 crc kubenswrapper[4792]: I0216 21:47:32.264293 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b" exitCode=0 Feb 16 21:47:32 crc kubenswrapper[4792]: I0216 21:47:32.264335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b"} Feb 16 21:47:32 crc kubenswrapper[4792]: I0216 21:47:32.264686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2"} Feb 16 21:47:32 crc kubenswrapper[4792]: I0216 21:47:32.264709 4792 scope.go:117] "RemoveContainer" 
containerID="f96d495740eb8729dfbeebadc5c0750e7b51d332aff72a9ef1710e22093f345f" Feb 16 21:47:40 crc kubenswrapper[4792]: I0216 21:47:40.025854 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:40 crc kubenswrapper[4792]: I0216 21:47:40.026735 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:40 crc kubenswrapper[4792]: E0216 21:47:40.063716 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(9004c4d790a100d2f652b5b281feaa8eef70156c26fbbe76f156aed54d1dafa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:40 crc kubenswrapper[4792]: E0216 21:47:40.063786 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(9004c4d790a100d2f652b5b281feaa8eef70156c26fbbe76f156aed54d1dafa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:40 crc kubenswrapper[4792]: E0216 21:47:40.063806 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(9004c4d790a100d2f652b5b281feaa8eef70156c26fbbe76f156aed54d1dafa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:40 crc kubenswrapper[4792]: E0216 21:47:40.063850 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-785cg_openshift-operators(cc1404e2-49f6-48df-99fc-24b7b05b5e33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-785cg_openshift-operators_cc1404e2-49f6-48df-99fc-24b7b05b5e33_0(9004c4d790a100d2f652b5b281feaa8eef70156c26fbbe76f156aed54d1dafa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" podUID="cc1404e2-49f6-48df-99fc-24b7b05b5e33" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.025858 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.025926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.025931 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.026793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.027070 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:42 crc kubenswrapper[4792]: I0216 21:47:42.027129 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.068580 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(737bc7481c0465b81d53a71b6dc1a6ffc4201225d69bb1b3f6db8b3b92eff18c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.068713 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(737bc7481c0465b81d53a71b6dc1a6ffc4201225d69bb1b3f6db8b3b92eff18c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.068758 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(737bc7481c0465b81d53a71b6dc1a6ffc4201225d69bb1b3f6db8b3b92eff18c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.068839 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators(2899a7e8-f5fa-4879-9df7-ba57ae9f4262)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_openshift-operators_2899a7e8-f5fa-4879-9df7-ba57ae9f4262_0(737bc7481c0465b81d53a71b6dc1a6ffc4201225d69bb1b3f6db8b3b92eff18c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" podUID="2899a7e8-f5fa-4879-9df7-ba57ae9f4262" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.092914 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(4e18b6128b479762db73a0b3cfc43286c6dc8f93dfa9e059f60f8ffd53c83ad6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.092989 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(4e18b6128b479762db73a0b3cfc43286c6dc8f93dfa9e059f60f8ffd53c83ad6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.093018 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(4e18b6128b479762db73a0b3cfc43286c6dc8f93dfa9e059f60f8ffd53c83ad6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.093097 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7jr7l_openshift-operators(f912b10c-80d1-4667-b807-45a54e626fbe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7jr7l_openshift-operators_f912b10c-80d1-4667-b807-45a54e626fbe_0(4e18b6128b479762db73a0b3cfc43286c6dc8f93dfa9e059f60f8ffd53c83ad6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" podUID="f912b10c-80d1-4667-b807-45a54e626fbe" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.103889 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(357e89fa732421c694f36b41923e82299d73d5defd138ae9517d5d1f95c790f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.103962 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(357e89fa732421c694f36b41923e82299d73d5defd138ae9517d5d1f95c790f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.103993 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(357e89fa732421c694f36b41923e82299d73d5defd138ae9517d5d1f95c790f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:42 crc kubenswrapper[4792]: E0216 21:47:42.104058 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators(e173d96c-280b-4293-ae21-272cce1b11bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_openshift-operators_e173d96c-280b-4293-ae21-272cce1b11bc_0(357e89fa732421c694f36b41923e82299d73d5defd138ae9517d5d1f95c790f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" podUID="e173d96c-280b-4293-ae21-272cce1b11bc" Feb 16 21:47:44 crc kubenswrapper[4792]: I0216 21:47:44.025659 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:44 crc kubenswrapper[4792]: I0216 21:47:44.026447 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:44 crc kubenswrapper[4792]: E0216 21:47:44.050069 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(a2fced8a9a6c502f63367fefa20b1083339384c54a8186e689cf71b6e8c951a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 21:47:44 crc kubenswrapper[4792]: E0216 21:47:44.050129 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(a2fced8a9a6c502f63367fefa20b1083339384c54a8186e689cf71b6e8c951a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:44 crc kubenswrapper[4792]: E0216 21:47:44.050151 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(a2fced8a9a6c502f63367fefa20b1083339384c54a8186e689cf71b6e8c951a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:44 crc kubenswrapper[4792]: E0216 21:47:44.050188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7sqrb_openshift-operators(85d29954-608f-4bb5-805e-5ac6d45b6652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7sqrb_openshift-operators_85d29954-608f-4bb5-805e-5ac6d45b6652_0(a2fced8a9a6c502f63367fefa20b1083339384c54a8186e689cf71b6e8c951a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" podUID="85d29954-608f-4bb5-805e-5ac6d45b6652" Feb 16 21:47:46 crc kubenswrapper[4792]: I0216 21:47:46.026789 4792 scope.go:117] "RemoveContainer" containerID="664aef9db56bbd1912357051ec864649ae3110909b6394c8e4772f7ce2c6d436" Feb 16 21:47:46 crc kubenswrapper[4792]: I0216 21:47:46.342507 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mp8ql_3f2095e9-5a78-45fb-a930-eacbd54ec73d/kube-multus/2.log" Feb 16 21:47:46 crc kubenswrapper[4792]: I0216 21:47:46.343182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mp8ql" event={"ID":"3f2095e9-5a78-45fb-a930-eacbd54ec73d","Type":"ContainerStarted","Data":"ca6bf1f00db3b7c3504a46b68ecf398b26c7188060afbb71586f360eb2407fd3"} Feb 16 21:47:51 crc kubenswrapper[4792]: I0216 21:47:51.385978 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhlc8" Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.026009 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.026019 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.026443 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.027131 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.337788 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-785cg"] Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.385820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" event={"ID":"cc1404e2-49f6-48df-99fc-24b7b05b5e33","Type":"ContainerStarted","Data":"0c50a07de0dc75075201911f984d897dfca879435ee0bcca1e2c1ea0f8664999"} Feb 16 21:47:53 crc kubenswrapper[4792]: I0216 21:47:53.489383 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v"] Feb 16 21:47:53 crc kubenswrapper[4792]: W0216 21:47:53.496428 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2899a7e8_f5fa_4879_9df7_ba57ae9f4262.slice/crio-433c9fdd5987c018739ed138a49638a60f019a9d95db625792fb94d4a967257b WatchSource:0}: Error finding container 433c9fdd5987c018739ed138a49638a60f019a9d95db625792fb94d4a967257b: Status 404 returned error can't find the container with id 433c9fdd5987c018739ed138a49638a60f019a9d95db625792fb94d4a967257b Feb 16 21:47:54 crc kubenswrapper[4792]: I0216 21:47:54.392291 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" event={"ID":"2899a7e8-f5fa-4879-9df7-ba57ae9f4262","Type":"ContainerStarted","Data":"433c9fdd5987c018739ed138a49638a60f019a9d95db625792fb94d4a967257b"} Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.026211 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.026278 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.027191 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.027204 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.577851 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7sqrb"] Feb 16 21:47:55 crc kubenswrapper[4792]: W0216 21:47:55.590115 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d29954_608f_4bb5_805e_5ac6d45b6652.slice/crio-6b0a60bd3c64cfcfcf95df5b172403ccd5aaaa021a4068562c8d08e82257d080 WatchSource:0}: Error finding container 6b0a60bd3c64cfcfcf95df5b172403ccd5aaaa021a4068562c8d08e82257d080: Status 404 returned error can't find the container with id 6b0a60bd3c64cfcfcf95df5b172403ccd5aaaa021a4068562c8d08e82257d080 Feb 16 21:47:55 crc kubenswrapper[4792]: I0216 21:47:55.629293 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg"] Feb 16 21:47:55 crc kubenswrapper[4792]: W0216 21:47:55.630476 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode173d96c_280b_4293_ae21_272cce1b11bc.slice/crio-7234a43fb88117ce38706c4a57667293b91e9de007fc8c4069289d429e656ff9 WatchSource:0}: Error finding container 7234a43fb88117ce38706c4a57667293b91e9de007fc8c4069289d429e656ff9: Status 404 returned error can't find the container with id 7234a43fb88117ce38706c4a57667293b91e9de007fc8c4069289d429e656ff9 Feb 16 21:47:56 crc kubenswrapper[4792]: I0216 21:47:56.414725 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" event={"ID":"85d29954-608f-4bb5-805e-5ac6d45b6652","Type":"ContainerStarted","Data":"6b0a60bd3c64cfcfcf95df5b172403ccd5aaaa021a4068562c8d08e82257d080"} Feb 16 21:47:56 crc kubenswrapper[4792]: I0216 21:47:56.416491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" event={"ID":"e173d96c-280b-4293-ae21-272cce1b11bc","Type":"ContainerStarted","Data":"7234a43fb88117ce38706c4a57667293b91e9de007fc8c4069289d429e656ff9"} Feb 16 21:47:57 crc kubenswrapper[4792]: I0216 21:47:57.027318 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:57 crc kubenswrapper[4792]: I0216 21:47:57.027963 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.190984 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7jr7l"] Feb 16 21:47:59 crc kubenswrapper[4792]: W0216 21:47:59.218675 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf912b10c_80d1_4667_b807_45a54e626fbe.slice/crio-a5e3bbc2367c6469196f8f6ec1b92825f9f2ca835cc80c009750528c30ade971 WatchSource:0}: Error finding container a5e3bbc2367c6469196f8f6ec1b92825f9f2ca835cc80c009750528c30ade971: Status 404 returned error can't find the container with id a5e3bbc2367c6469196f8f6ec1b92825f9f2ca835cc80c009750528c30ade971 Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.441948 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" event={"ID":"f912b10c-80d1-4667-b807-45a54e626fbe","Type":"ContainerStarted","Data":"a5e3bbc2367c6469196f8f6ec1b92825f9f2ca835cc80c009750528c30ade971"} Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.443524 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" event={"ID":"2899a7e8-f5fa-4879-9df7-ba57ae9f4262","Type":"ContainerStarted","Data":"5236bf15073ea606c21a9b8c684d28e66d06ef1160c5ecbf74d8a5f253092074"} Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.447175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" event={"ID":"cc1404e2-49f6-48df-99fc-24b7b05b5e33","Type":"ContainerStarted","Data":"f23e4eeb357956fba4a680828bb643be93ec09bedf63773a43623f119fdb68fe"} Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.449525 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" event={"ID":"e173d96c-280b-4293-ae21-272cce1b11bc","Type":"ContainerStarted","Data":"2bcebc0513e900f84f0175fe9c7435879b1f080498b7bf64f9588f8ac6ba9730"} Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.476739 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v" podStartSLOduration=28.164681741 podStartE2EDuration="33.476719512s" podCreationTimestamp="2026-02-16 21:47:26 +0000 UTC" firstStartedPulling="2026-02-16 21:47:53.500662443 +0000 UTC m=+606.153941334" lastFinishedPulling="2026-02-16 21:47:58.812700214 +0000 UTC m=+611.465979105" observedRunningTime="2026-02-16 21:47:59.463533378 +0000 UTC m=+612.116812289" watchObservedRunningTime="2026-02-16 21:47:59.476719512 +0000 UTC m=+612.129998403" Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 21:47:59.502833 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg" podStartSLOduration=30.354419838 podStartE2EDuration="33.502778883s" podCreationTimestamp="2026-02-16 21:47:26 +0000 UTC" firstStartedPulling="2026-02-16 21:47:55.633207129 +0000 UTC m=+608.286486020" lastFinishedPulling="2026-02-16 21:47:58.781566174 +0000 UTC m=+611.434845065" observedRunningTime="2026-02-16 21:47:59.48784973 +0000 UTC m=+612.141128661" watchObservedRunningTime="2026-02-16 21:47:59.502778883 +0000 UTC m=+612.156057804" Feb 16 21:47:59 crc kubenswrapper[4792]: I0216 
21:47:59.518172 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-785cg" podStartSLOduration=28.082282365 podStartE2EDuration="33.518150137s" podCreationTimestamp="2026-02-16 21:47:26 +0000 UTC" firstStartedPulling="2026-02-16 21:47:53.349188158 +0000 UTC m=+606.002467049" lastFinishedPulling="2026-02-16 21:47:58.78505593 +0000 UTC m=+611.438334821" observedRunningTime="2026-02-16 21:47:59.50774247 +0000 UTC m=+612.161021401" watchObservedRunningTime="2026-02-16 21:47:59.518150137 +0000 UTC m=+612.171429028"
Feb 16 21:48:02 crc kubenswrapper[4792]: I0216 21:48:02.469777 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" event={"ID":"85d29954-608f-4bb5-805e-5ac6d45b6652","Type":"ContainerStarted","Data":"0d5a0b9a2ece705d87e092fdc04567726c354510890d9d2702f92e819c1dda6f"}
Feb 16 21:48:02 crc kubenswrapper[4792]: I0216 21:48:02.470679 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb"
Feb 16 21:48:02 crc kubenswrapper[4792]: I0216 21:48:02.471123 4792 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7sqrb container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Feb 16 21:48:02 crc kubenswrapper[4792]: I0216 21:48:02.471179 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" podUID="85d29954-608f-4bb5-805e-5ac6d45b6652" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Feb 16 21:48:02 crc kubenswrapper[4792]: I0216 21:48:02.491942 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb" podStartSLOduration=29.996195091 podStartE2EDuration="36.491922438s" podCreationTimestamp="2026-02-16 21:47:26 +0000 UTC" firstStartedPulling="2026-02-16 21:47:55.597705789 +0000 UTC m=+608.250984680" lastFinishedPulling="2026-02-16 21:48:02.093433136 +0000 UTC m=+614.746712027" observedRunningTime="2026-02-16 21:48:02.49092018 +0000 UTC m=+615.144199081" watchObservedRunningTime="2026-02-16 21:48:02.491922438 +0000 UTC m=+615.145201329"
Feb 16 21:48:03 crc kubenswrapper[4792]: I0216 21:48:03.477421 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7sqrb"
Feb 16 21:48:04 crc kubenswrapper[4792]: I0216 21:48:04.483418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" event={"ID":"f912b10c-80d1-4667-b807-45a54e626fbe","Type":"ContainerStarted","Data":"658f7f9bab794c31f3b0d7e1db69705a822e14d92158d418864ec7645c2b35c0"}
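Two patterns in the entries above are worth separating. First, the readiness probe for observability-operator fails once with connection refused right after ContainerStarted and reports ready one second later; a single refused probe in that window is just the gap between the container starting and the process binding its port. Second, each "Observed pod startup duration" record embeds the image-pull window in its m=+<seconds> monotonic offsets: lastFinishedPulling minus firstStartedPulling is the pull time, which also accounts for the gap between podStartSLOduration and podStartE2EDuration. A small sketch that extracts this from journal lines like those above (the regexes mirror these exact fields; the script itself is illustrative):

    #!/usr/bin/env python3
    # Compute image pull time from the pod_startup_latency_tracker entries
    # above, using the monotonic m=+<seconds> offsets in each timestamp.
    import re
    import sys

    FIELD = re.compile(
        r'(firstStartedPulling|lastFinishedPulling)="[^"]*? m=\+([0-9.]+)"'
    )
    POD = re.compile(r'pod="([^"]+)" podStartSLOduration')

    for line in sys.stdin:
        if "Observed pod startup duration" not in line:
            continue
        pod = POD.search(line)
        offsets = {k: float(v) for k, v in FIELD.findall(line)}
        if pod and len(offsets) == 2:
            pull = offsets["lastFinishedPulling"] - offsets["firstStartedPulling"]
            print(f"{pod.group(1)}: image pull took {pull:.2f}s")

Fed this journal it would report roughly 5.44s for obo-prometheus-operator-68bc856cb9-785cg, 6.50s for observability-operator-59bdc8b94-7sqrb, and 4.73s for perses-operator-5bf474d74f-7jr7l.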
Feb 16 21:48:04 crc kubenswrapper[4792]: I0216 21:48:04.503104 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l" podStartSLOduration=33.770006045 podStartE2EDuration="38.503085689s" podCreationTimestamp="2026-02-16 21:47:26 +0000 UTC" firstStartedPulling="2026-02-16 21:47:59.224287757 +0000 UTC m=+611.877566638" lastFinishedPulling="2026-02-16 21:48:03.957367391 +0000 UTC m=+616.610646282" observedRunningTime="2026-02-16 21:48:04.49841126 +0000 UTC m=+617.151690161" watchObservedRunningTime="2026-02-16 21:48:04.503085689 +0000 UTC m=+617.156364580"
Feb 16 21:48:05 crc kubenswrapper[4792]: I0216 21:48:05.500307 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l"
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.980094 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z"]
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.981557 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z"
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.984905 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.984967 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dkrsj"
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.986855 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.988945 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z"]
Feb 16 21:48:11 crc kubenswrapper[4792]: I0216 21:48:11.995821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rlj\" (UniqueName: \"kubernetes.io/projected/99532456-78ab-4fbd-8aec-6211c50318c2-kube-api-access-x9rlj\") pod \"cert-manager-cainjector-cf98fcc89-n7j6z\" (UID: \"99532456-78ab-4fbd-8aec-6211c50318c2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z"
Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.012078 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qdhtx"]
Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.017237 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qdhtx" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.039390 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2h96l" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.070734 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qdhtx"] Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.082831 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z4dw4"] Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.096670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4p7t\" (UniqueName: \"kubernetes.io/projected/7507a7a6-6084-469d-a099-a8261994754f-kube-api-access-n4p7t\") pod \"cert-manager-858654f9db-qdhtx\" (UID: \"7507a7a6-6084-469d-a099-a8261994754f\") " pod="cert-manager/cert-manager-858654f9db-qdhtx" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.098171 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rlj\" (UniqueName: \"kubernetes.io/projected/99532456-78ab-4fbd-8aec-6211c50318c2-kube-api-access-x9rlj\") pod \"cert-manager-cainjector-cf98fcc89-n7j6z\" (UID: \"99532456-78ab-4fbd-8aec-6211c50318c2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.100164 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.105780 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-llwrt" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.108077 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z4dw4"] Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.125777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rlj\" (UniqueName: \"kubernetes.io/projected/99532456-78ab-4fbd-8aec-6211c50318c2-kube-api-access-x9rlj\") pod \"cert-manager-cainjector-cf98fcc89-n7j6z\" (UID: \"99532456-78ab-4fbd-8aec-6211c50318c2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.199831 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqzx\" (UniqueName: \"kubernetes.io/projected/4f130ece-d511-4abe-8198-8629164ab661-kube-api-access-dwqzx\") pod \"cert-manager-webhook-687f57d79b-z4dw4\" (UID: \"4f130ece-d511-4abe-8198-8629164ab661\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.200202 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4p7t\" (UniqueName: \"kubernetes.io/projected/7507a7a6-6084-469d-a099-a8261994754f-kube-api-access-n4p7t\") pod \"cert-manager-858654f9db-qdhtx\" (UID: \"7507a7a6-6084-469d-a099-a8261994754f\") " pod="cert-manager/cert-manager-858654f9db-qdhtx" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.216063 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4p7t\" (UniqueName: \"kubernetes.io/projected/7507a7a6-6084-469d-a099-a8261994754f-kube-api-access-n4p7t\") pod 
\"cert-manager-858654f9db-qdhtx\" (UID: \"7507a7a6-6084-469d-a099-a8261994754f\") " pod="cert-manager/cert-manager-858654f9db-qdhtx" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.301242 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwqzx\" (UniqueName: \"kubernetes.io/projected/4f130ece-d511-4abe-8198-8629164ab661-kube-api-access-dwqzx\") pod \"cert-manager-webhook-687f57d79b-z4dw4\" (UID: \"4f130ece-d511-4abe-8198-8629164ab661\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.315394 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwqzx\" (UniqueName: \"kubernetes.io/projected/4f130ece-d511-4abe-8198-8629164ab661-kube-api-access-dwqzx\") pod \"cert-manager-webhook-687f57d79b-z4dw4\" (UID: \"4f130ece-d511-4abe-8198-8629164ab661\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.320841 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.372272 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qdhtx" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.448156 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.531557 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z"] Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.584899 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qdhtx"] Feb 16 21:48:12 crc kubenswrapper[4792]: W0216 21:48:12.588206 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7507a7a6_6084_469d_a099_a8261994754f.slice/crio-88811e54316954132a35acdeaf5a3c396acd27c60329f178a2b3910b20a2d926 WatchSource:0}: Error finding container 88811e54316954132a35acdeaf5a3c396acd27c60329f178a2b3910b20a2d926: Status 404 returned error can't find the container with id 88811e54316954132a35acdeaf5a3c396acd27c60329f178a2b3910b20a2d926 Feb 16 21:48:12 crc kubenswrapper[4792]: W0216 21:48:12.668038 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f130ece_d511_4abe_8198_8629164ab661.slice/crio-816d792dfab9cd41f3ecfaec472c7258be4e5606c5fd65baf8e6be6d42cddd0d WatchSource:0}: Error finding container 816d792dfab9cd41f3ecfaec472c7258be4e5606c5fd65baf8e6be6d42cddd0d: Status 404 returned error can't find the container with id 816d792dfab9cd41f3ecfaec472c7258be4e5606c5fd65baf8e6be6d42cddd0d Feb 16 21:48:12 crc kubenswrapper[4792]: I0216 21:48:12.669453 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z4dw4"] Feb 16 21:48:13 crc kubenswrapper[4792]: I0216 21:48:13.552431 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" event={"ID":"4f130ece-d511-4abe-8198-8629164ab661","Type":"ContainerStarted","Data":"816d792dfab9cd41f3ecfaec472c7258be4e5606c5fd65baf8e6be6d42cddd0d"} Feb 16 21:48:13 crc kubenswrapper[4792]: I0216 21:48:13.553531 4792 
Feb 16 21:48:13 crc kubenswrapper[4792]: I0216 21:48:13.553531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" event={"ID":"99532456-78ab-4fbd-8aec-6211c50318c2","Type":"ContainerStarted","Data":"71d55959c0fbf202982168431c065a4b5fc4a18dfe7d2944b6b9f60de9aafd75"}
Feb 16 21:48:13 crc kubenswrapper[4792]: I0216 21:48:13.554474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qdhtx" event={"ID":"7507a7a6-6084-469d-a099-a8261994754f","Type":"ContainerStarted","Data":"88811e54316954132a35acdeaf5a3c396acd27c60329f178a2b3910b20a2d926"}
Feb 16 21:48:16 crc kubenswrapper[4792]: I0216 21:48:16.881581 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7jr7l"
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.583132 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" event={"ID":"4f130ece-d511-4abe-8198-8629164ab661","Type":"ContainerStarted","Data":"522ac6ca24ecc1f30757b79c0e82c0dbf661134cabb67d5d5e033fd26b33ca97"}
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.583559 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4"
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.584574 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" event={"ID":"99532456-78ab-4fbd-8aec-6211c50318c2","Type":"ContainerStarted","Data":"07d9f1e555cfce3b67a4cba3998e712093bd4f8b75a3d90dc68896abc4ca8440"}
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.586161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qdhtx" event={"ID":"7507a7a6-6084-469d-a099-a8261994754f","Type":"ContainerStarted","Data":"f4ec3db258dca7da1a34f4b1ef05ca41089acc103d623c2403d42fa0074d7ebd"}
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.599240 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" podStartSLOduration=1.947859036 podStartE2EDuration="5.599222268s" podCreationTimestamp="2026-02-16 21:48:12 +0000 UTC" firstStartedPulling="2026-02-16 21:48:12.670124359 +0000 UTC m=+625.323403250" lastFinishedPulling="2026-02-16 21:48:16.321487591 +0000 UTC m=+628.974766482" observedRunningTime="2026-02-16 21:48:17.596982085 +0000 UTC m=+630.250260976" watchObservedRunningTime="2026-02-16 21:48:17.599222268 +0000 UTC m=+630.252501159"
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.621698 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qdhtx" podStartSLOduration=2.8820789270000002 podStartE2EDuration="6.621681528s" podCreationTimestamp="2026-02-16 21:48:11 +0000 UTC" firstStartedPulling="2026-02-16 21:48:12.590875999 +0000 UTC m=+625.244154890" lastFinishedPulling="2026-02-16 21:48:16.33047861 +0000 UTC m=+628.983757491" observedRunningTime="2026-02-16 21:48:17.617632136 +0000 UTC m=+630.270911047" watchObservedRunningTime="2026-02-16 21:48:17.621681528 +0000 UTC m=+630.274960419"
Feb 16 21:48:17 crc kubenswrapper[4792]: I0216 21:48:17.637941 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n7j6z" podStartSLOduration=2.843334776 podStartE2EDuration="6.637919906s" podCreationTimestamp="2026-02-16 21:48:11 +0000 UTC" firstStartedPulling="2026-02-16
21:48:12.549143546 +0000 UTC m=+625.202422437" lastFinishedPulling="2026-02-16 21:48:16.343728676 +0000 UTC m=+628.997007567" observedRunningTime="2026-02-16 21:48:17.63298707 +0000 UTC m=+630.286265961" watchObservedRunningTime="2026-02-16 21:48:17.637919906 +0000 UTC m=+630.291198797" Feb 16 21:48:22 crc kubenswrapper[4792]: I0216 21:48:22.451246 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-z4dw4" Feb 16 21:48:43 crc kubenswrapper[4792]: I0216 21:48:43.911143 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd"] Feb 16 21:48:43 crc kubenswrapper[4792]: I0216 21:48:43.912976 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:43 crc kubenswrapper[4792]: I0216 21:48:43.915096 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:48:43 crc kubenswrapper[4792]: I0216 21:48:43.935104 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd"] Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.018056 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.018139 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.018196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9smg6\" (UniqueName: \"kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.105524 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m"] Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.107283 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.119590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9smg6\" (UniqueName: \"kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.119783 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.119947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.120513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.121689 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.134745 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m"] Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.148691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9smg6\" (UniqueName: \"kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.221029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.221286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.221368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghv29\" (UniqueName: \"kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.240004 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.322851 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.322922 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghv29\" (UniqueName: \"kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.322991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.323640 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.323768 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.367382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghv29\" (UniqueName: \"kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.426884 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.674666 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m"] Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.801238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd"] Feb 16 21:48:44 crc kubenswrapper[4792]: I0216 21:48:44.806682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" event={"ID":"a4af572d-5db9-4583-b0be-58556116679c","Type":"ContainerStarted","Data":"161a6d2ec1af5f0281cdddf4877879e6fd79b25ccc1fcff17d1a1224d08ef6ec"} Feb 16 21:48:45 crc kubenswrapper[4792]: I0216 21:48:45.819147 4792 generic.go:334] "Generic (PLEG): container finished" podID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerID="5d83c9b09cf411d67bb24c50cf0f6ee3dc48043b895cc98048a96efda39c0727" exitCode=0 Feb 16 21:48:45 crc kubenswrapper[4792]: I0216 21:48:45.819262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" event={"ID":"b0512fe0-f5a1-4558-a562-30ad7a59856c","Type":"ContainerDied","Data":"5d83c9b09cf411d67bb24c50cf0f6ee3dc48043b895cc98048a96efda39c0727"} Feb 16 21:48:45 crc kubenswrapper[4792]: I0216 21:48:45.819754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" event={"ID":"b0512fe0-f5a1-4558-a562-30ad7a59856c","Type":"ContainerStarted","Data":"c7331c0baac143c8fb06ef9a1fc7c7819b5b6a06d1c4137e3c46e7ecdc22860f"} Feb 16 21:48:45 crc kubenswrapper[4792]: I0216 21:48:45.824182 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4af572d-5db9-4583-b0be-58556116679c" containerID="e2025e40be37272ad2f4446196d57dfcb27ee4eb39f362e63d226c0216740844" exitCode=0 Feb 16 21:48:45 crc kubenswrapper[4792]: I0216 21:48:45.824265 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" event={"ID":"a4af572d-5db9-4583-b0be-58556116679c","Type":"ContainerDied","Data":"e2025e40be37272ad2f4446196d57dfcb27ee4eb39f362e63d226c0216740844"} Feb 16 21:48:47 crc kubenswrapper[4792]: I0216 21:48:47.838495 4792 generic.go:334] "Generic (PLEG): container finished" podID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerID="45189bec7fc41246eb8901e8d155dd5785b9eac7e86e2665162aeebc38da8590" exitCode=0 Feb 16 21:48:47 crc kubenswrapper[4792]: I0216 21:48:47.838563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" event={"ID":"b0512fe0-f5a1-4558-a562-30ad7a59856c","Type":"ContainerDied","Data":"45189bec7fc41246eb8901e8d155dd5785b9eac7e86e2665162aeebc38da8590"} Feb 16 21:48:47 crc kubenswrapper[4792]: I0216 21:48:47.842267 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4af572d-5db9-4583-b0be-58556116679c" containerID="93c1ffb3832dc822076c7ece9fb68a89bc1bb4780eb751176fa0bb687767ed3f" exitCode=0 Feb 16 21:48:47 crc kubenswrapper[4792]: I0216 21:48:47.842296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" event={"ID":"a4af572d-5db9-4583-b0be-58556116679c","Type":"ContainerDied","Data":"93c1ffb3832dc822076c7ece9fb68a89bc1bb4780eb751176fa0bb687767ed3f"} Feb 16 21:48:48 crc kubenswrapper[4792]: I0216 21:48:48.853524 4792 generic.go:334] "Generic (PLEG): container finished" podID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerID="2576a96d855cc967a1eede4be5720b5dfbeb2a210e2d929fd730038410192892" exitCode=0 Feb 16 21:48:48 crc kubenswrapper[4792]: I0216 21:48:48.853665 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" event={"ID":"b0512fe0-f5a1-4558-a562-30ad7a59856c","Type":"ContainerDied","Data":"2576a96d855cc967a1eede4be5720b5dfbeb2a210e2d929fd730038410192892"} Feb 16 21:48:48 crc kubenswrapper[4792]: I0216 21:48:48.858321 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4af572d-5db9-4583-b0be-58556116679c" containerID="5c227a28d953e997fd1d00e3379e588d4433df7aabd36d51078e8fca65cb1aa3" exitCode=0 Feb 16 21:48:48 crc kubenswrapper[4792]: I0216 21:48:48.858396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" event={"ID":"a4af572d-5db9-4583-b0be-58556116679c","Type":"ContainerDied","Data":"5c227a28d953e997fd1d00e3379e588d4433df7aabd36d51078e8fca65cb1aa3"} Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.187155 4792 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.187155 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd"
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.216743 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9smg6\" (UniqueName: \"kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6\") pod \"b0512fe0-f5a1-4558-a562-30ad7a59856c\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") "
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.216833 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util\") pod \"b0512fe0-f5a1-4558-a562-30ad7a59856c\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") "
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.216923 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle\") pod \"b0512fe0-f5a1-4558-a562-30ad7a59856c\" (UID: \"b0512fe0-f5a1-4558-a562-30ad7a59856c\") "
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.218066 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle" (OuterVolumeSpecName: "bundle") pod "b0512fe0-f5a1-4558-a562-30ad7a59856c" (UID: "b0512fe0-f5a1-4558-a562-30ad7a59856c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.227944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6" (OuterVolumeSpecName: "kube-api-access-9smg6") pod "b0512fe0-f5a1-4558-a562-30ad7a59856c" (UID: "b0512fe0-f5a1-4558-a562-30ad7a59856c"). InnerVolumeSpecName "kube-api-access-9smg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.231585 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util" (OuterVolumeSpecName: "util") pod "b0512fe0-f5a1-4558-a562-30ad7a59856c" (UID: "b0512fe0-f5a1-4558-a562-30ad7a59856c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.271811 4792 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.317999 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghv29\" (UniqueName: \"kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29\") pod \"a4af572d-5db9-4583-b0be-58556116679c\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.318072 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle\") pod \"a4af572d-5db9-4583-b0be-58556116679c\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.318224 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util\") pod \"a4af572d-5db9-4583-b0be-58556116679c\" (UID: \"a4af572d-5db9-4583-b0be-58556116679c\") " Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.318518 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.318540 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9smg6\" (UniqueName: \"kubernetes.io/projected/b0512fe0-f5a1-4558-a562-30ad7a59856c-kube-api-access-9smg6\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.318552 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0512fe0-f5a1-4558-a562-30ad7a59856c-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.319091 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle" (OuterVolumeSpecName: "bundle") pod "a4af572d-5db9-4583-b0be-58556116679c" (UID: "a4af572d-5db9-4583-b0be-58556116679c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.321323 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29" (OuterVolumeSpecName: "kube-api-access-ghv29") pod "a4af572d-5db9-4583-b0be-58556116679c" (UID: "a4af572d-5db9-4583-b0be-58556116679c"). InnerVolumeSpecName "kube-api-access-ghv29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.338522 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util" (OuterVolumeSpecName: "util") pod "a4af572d-5db9-4583-b0be-58556116679c" (UID: "a4af572d-5db9-4583-b0be-58556116679c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.420340 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.420392 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghv29\" (UniqueName: \"kubernetes.io/projected/a4af572d-5db9-4583-b0be-58556116679c-kube-api-access-ghv29\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.420412 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4af572d-5db9-4583-b0be-58556116679c-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.878686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" event={"ID":"a4af572d-5db9-4583-b0be-58556116679c","Type":"ContainerDied","Data":"161a6d2ec1af5f0281cdddf4877879e6fd79b25ccc1fcff17d1a1224d08ef6ec"} Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.879123 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161a6d2ec1af5f0281cdddf4877879e6fd79b25ccc1fcff17d1a1224d08ef6ec" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.878705 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.881381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" event={"ID":"b0512fe0-f5a1-4558-a562-30ad7a59856c","Type":"ContainerDied","Data":"c7331c0baac143c8fb06ef9a1fc7c7819b5b6a06d1c4137e3c46e7ecdc22860f"} Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.881470 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7331c0baac143c8fb06ef9a1fc7c7819b5b6a06d1c4137e3c46e7ecdc22860f" Feb 16 21:48:50 crc kubenswrapper[4792]: I0216 21:48:50.881505 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.455161 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p"] Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457145 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="util" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457247 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="util" Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457303 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="pull" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457352 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="pull" Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457418 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457468 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457525 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457579 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457676 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="util" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457733 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="util" Feb 16 21:49:01 crc kubenswrapper[4792]: E0216 21:49:01.457787 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="pull" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457835 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="pull" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.457994 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4af572d-5db9-4583-b0be-58556116679c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.458064 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0512fe0-f5a1-4558-a562-30ad7a59856c" containerName="extract" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.458766 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.462628 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.462855 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-b2v27" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.463051 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.463110 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.463211 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.463353 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.485400 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p"] Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.589949 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-webhook-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.590233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e2d0a7d0-53d6-4031-894c-734f67974527-manager-config\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.590371 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44x9\" (UniqueName: \"kubernetes.io/projected/e2d0a7d0-53d6-4031-894c-734f67974527-kube-api-access-q44x9\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.590451 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.590531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-apiservice-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.692877 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44x9\" (UniqueName: \"kubernetes.io/projected/e2d0a7d0-53d6-4031-894c-734f67974527-kube-api-access-q44x9\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.692938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.692972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-apiservice-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.693038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-webhook-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.693102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e2d0a7d0-53d6-4031-894c-734f67974527-manager-config\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.694328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e2d0a7d0-53d6-4031-894c-734f67974527-manager-config\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.699276 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-webhook-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.701295 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-apiservice-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.701709 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e2d0a7d0-53d6-4031-894c-734f67974527-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.712314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44x9\" (UniqueName: \"kubernetes.io/projected/e2d0a7d0-53d6-4031-894c-734f67974527-kube-api-access-q44x9\") pod \"loki-operator-controller-manager-6c9d97fb5-j4f5p\" (UID: \"e2d0a7d0-53d6-4031-894c-734f67974527\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:01 crc kubenswrapper[4792]: I0216 21:49:01.775531 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:02 crc kubenswrapper[4792]: I0216 21:49:02.014091 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p"] Feb 16 21:49:02 crc kubenswrapper[4792]: I0216 21:49:02.983902 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" event={"ID":"e2d0a7d0-53d6-4031-894c-734f67974527","Type":"ContainerStarted","Data":"f39f65f080a56db4b559f50141347ca5c76297b903d12df94e79340b814188b6"} Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.744630 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-7bglt"] Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.747647 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.749568 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-7xlvx" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.750674 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.750800 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.758068 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-7bglt"] Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.848690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gthq\" (UniqueName: \"kubernetes.io/projected/e8d6dc28-8ec7-4d64-9868-673d3ea42873-kube-api-access-2gthq\") pod \"cluster-logging-operator-c769fd969-7bglt\" (UID: \"e8d6dc28-8ec7-4d64-9868-673d3ea42873\") " pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.950207 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gthq\" (UniqueName: \"kubernetes.io/projected/e8d6dc28-8ec7-4d64-9868-673d3ea42873-kube-api-access-2gthq\") pod \"cluster-logging-operator-c769fd969-7bglt\" (UID: \"e8d6dc28-8ec7-4d64-9868-673d3ea42873\") " pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" Feb 16 21:49:04 crc kubenswrapper[4792]: I0216 21:49:04.982332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gthq\" (UniqueName: \"kubernetes.io/projected/e8d6dc28-8ec7-4d64-9868-673d3ea42873-kube-api-access-2gthq\") pod \"cluster-logging-operator-c769fd969-7bglt\" (UID: \"e8d6dc28-8ec7-4d64-9868-673d3ea42873\") " pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" Feb 16 21:49:05 crc kubenswrapper[4792]: I0216 21:49:05.071930 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" Feb 16 21:49:07 crc kubenswrapper[4792]: I0216 21:49:07.018180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" event={"ID":"e2d0a7d0-53d6-4031-894c-734f67974527","Type":"ContainerStarted","Data":"87adaf427755d524061e5e721839c21a48b6663dc343d0e0fc6f76c9bb30b4f3"} Feb 16 21:49:07 crc kubenswrapper[4792]: I0216 21:49:07.149078 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-7bglt"] Feb 16 21:49:07 crc kubenswrapper[4792]: W0216 21:49:07.151694 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d6dc28_8ec7_4d64_9868_673d3ea42873.slice/crio-7c0aa792c406441b2505c1eea6e7655bbc208c133be2022118d59eb301defd92 WatchSource:0}: Error finding container 7c0aa792c406441b2505c1eea6e7655bbc208c133be2022118d59eb301defd92: Status 404 returned error can't find the container with id 7c0aa792c406441b2505c1eea6e7655bbc208c133be2022118d59eb301defd92 Feb 16 21:49:08 crc kubenswrapper[4792]: I0216 21:49:08.033199 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" event={"ID":"e8d6dc28-8ec7-4d64-9868-673d3ea42873","Type":"ContainerStarted","Data":"7c0aa792c406441b2505c1eea6e7655bbc208c133be2022118d59eb301defd92"} Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.078489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" event={"ID":"e2d0a7d0-53d6-4031-894c-734f67974527","Type":"ContainerStarted","Data":"d94f03ceedcb99239b699c142310d8ddaf506841f1db83b534ed8fad41ebf39b"} Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.079625 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.081161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" event={"ID":"e8d6dc28-8ec7-4d64-9868-673d3ea42873","Type":"ContainerStarted","Data":"5e8e107e5f9b74cbc98544a798d4f2408baa7c1d52209d868168cc10d285b503"} Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.082573 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.114473 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6c9d97fb5-j4f5p" podStartSLOduration=1.326957164 podStartE2EDuration="14.114440044s" podCreationTimestamp="2026-02-16 21:49:01 +0000 UTC" firstStartedPulling="2026-02-16 21:49:02.033574526 +0000 UTC m=+674.686853417" lastFinishedPulling="2026-02-16 21:49:14.821057406 +0000 UTC m=+687.474336297" observedRunningTime="2026-02-16 21:49:15.109421083 +0000 UTC m=+687.762699974" watchObservedRunningTime="2026-02-16 21:49:15.114440044 +0000 UTC m=+687.767718935" Feb 16 21:49:15 crc kubenswrapper[4792]: I0216 21:49:15.143211 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-7bglt" podStartSLOduration=3.517412669 podStartE2EDuration="11.143178141s" 
podCreationTimestamp="2026-02-16 21:49:04 +0000 UTC" firstStartedPulling="2026-02-16 21:49:07.153712334 +0000 UTC m=+679.806991225" lastFinishedPulling="2026-02-16 21:49:14.779477806 +0000 UTC m=+687.432756697" observedRunningTime="2026-02-16 21:49:15.137665968 +0000 UTC m=+687.790944859" watchObservedRunningTime="2026-02-16 21:49:15.143178141 +0000 UTC m=+687.796457032" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.609354 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.610682 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.612745 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.616159 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.618316 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.689768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhbp\" (UniqueName: \"kubernetes.io/projected/7f91d6d6-dd13-435a-b8a1-526d7ef01d7f-kube-api-access-jjhbp\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.690110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.791574 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.791783 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhbp\" (UniqueName: \"kubernetes.io/projected/7f91d6d6-dd13-435a-b8a1-526d7ef01d7f-kube-api-access-jjhbp\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.795292 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.795348 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/25751a477b2b5a65f8608d3b90fe5ac7f413cb48de25c161887d5a8eff60d8d4/globalmount\"" pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.818333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhbp\" (UniqueName: \"kubernetes.io/projected/7f91d6d6-dd13-435a-b8a1-526d7ef01d7f-kube-api-access-jjhbp\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.827643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-469a6c7a-56bd-4beb-883c-935db4e50eec\") pod \"minio\" (UID: \"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f\") " pod="minio-dev/minio" Feb 16 21:49:19 crc kubenswrapper[4792]: I0216 21:49:19.927899 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 21:49:20 crc kubenswrapper[4792]: I0216 21:49:20.326142 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 21:49:20 crc kubenswrapper[4792]: W0216 21:49:20.337519 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f91d6d6_dd13_435a_b8a1_526d7ef01d7f.slice/crio-a82e9d44259f611cd65fc9bf84154b0b86f301e4d96bcf0463144c628c303fb4 WatchSource:0}: Error finding container a82e9d44259f611cd65fc9bf84154b0b86f301e4d96bcf0463144c628c303fb4: Status 404 returned error can't find the container with id a82e9d44259f611cd65fc9bf84154b0b86f301e4d96bcf0463144c628c303fb4 Feb 16 21:49:21 crc kubenswrapper[4792]: I0216 21:49:21.129288 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f","Type":"ContainerStarted","Data":"a82e9d44259f611cd65fc9bf84154b0b86f301e4d96bcf0463144c628c303fb4"} Feb 16 21:49:24 crc kubenswrapper[4792]: I0216 21:49:24.159445 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"7f91d6d6-dd13-435a-b8a1-526d7ef01d7f","Type":"ContainerStarted","Data":"55c0f0e6be35abd61717c3c9a950a68a9f0b77c640da6714d41442eed7102b62"} Feb 16 21:49:24 crc kubenswrapper[4792]: I0216 21:49:24.181872 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.179274544 podStartE2EDuration="7.181853697s" podCreationTimestamp="2026-02-16 21:49:17 +0000 UTC" firstStartedPulling="2026-02-16 21:49:20.339802438 +0000 UTC m=+692.993081329" lastFinishedPulling="2026-02-16 21:49:23.342381591 +0000 UTC m=+695.995660482" observedRunningTime="2026-02-16 21:49:24.177952525 +0000 UTC m=+696.831231476" watchObservedRunningTime="2026-02-16 21:49:24.181853697 +0000 UTC m=+696.835132588" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.807071 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq"] Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.808548 
4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.812544 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.812901 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.813440 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.813744 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-hrrfm" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.814211 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.863775 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq"] Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.942683 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.942755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.942792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47sn7\" (UniqueName: \"kubernetes.io/projected/6c9676d6-4914-442f-b206-68319ef59156-kube-api-access-47sn7\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.942916 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.942950 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-config\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 
21:49:29.961993 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-696l8"] Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.963095 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.968093 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.968401 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.968632 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 16 21:49:29 crc kubenswrapper[4792]: I0216 21:49:29.995350 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-696l8"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.044483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.044550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-config\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.044623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.044661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.044694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47sn7\" (UniqueName: \"kubernetes.io/projected/6c9676d6-4914-442f-b206-68319ef59156-kube-api-access-47sn7\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.045564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: 
\"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.046048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9676d6-4914-442f-b206-68319ef59156-config\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.051415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.055292 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6c9676d6-4914-442f-b206-68319ef59156-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.079731 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wks44"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.080990 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.083162 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.083353 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.099690 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47sn7\" (UniqueName: \"kubernetes.io/projected/6c9676d6-4914-442f-b206-68319ef59156-kube-api-access-47sn7\") pod \"logging-loki-distributor-5d5548c9f5-x5pvq\" (UID: \"6c9676d6-4914-442f-b206-68319ef59156\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.107402 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wks44"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.145546 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.147987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.148089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.148143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.148210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-config\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.148265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgm44\" (UniqueName: \"kubernetes.io/projected/1b78e491-c2b1-4381-b1df-4e53af021942-kube-api-access-mgm44\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.148304 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.193430 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.195091 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.201442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.201496 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.201678 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.201704 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.201772 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.213836 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.231485 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.232542 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.237039 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-rjhjl" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249627 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249711 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249734 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvlv\" (UniqueName: \"kubernetes.io/projected/3e0a446f-fcc8-40b8-81bc-fc80c8764582-kube-api-access-jnvlv\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 
16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249774 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249801 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-config\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249820 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-config\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249878 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgm44\" (UniqueName: \"kubernetes.io/projected/1b78e491-c2b1-4381-b1df-4e53af021942-kube-api-access-mgm44\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249901 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.249943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.250965 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.253055 4792 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.253300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b78e491-c2b1-4381-b1df-4e53af021942-config\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.255979 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.256583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.256753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1b78e491-c2b1-4381-b1df-4e53af021942-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.280854 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgm44\" (UniqueName: \"kubernetes.io/projected/1b78e491-c2b1-4381-b1df-4e53af021942-kube-api-access-mgm44\") pod \"logging-loki-querier-76bf7b6d45-696l8\" (UID: \"1b78e491-c2b1-4381-b1df-4e53af021942\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-rbac\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351722 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tls-secret\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351798 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmfc\" (UniqueName: \"kubernetes.io/projected/89876142-9620-43ca-bc5e-d0615a643826-kube-api-access-tdmfc\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351834 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351853 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351876 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351905 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351951 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-rbac\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.351991 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvlv\" (UniqueName: \"kubernetes.io/projected/3e0a446f-fcc8-40b8-81bc-fc80c8764582-kube-api-access-jnvlv\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tenants\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352060 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-config\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352076 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tenants\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352093 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nfp\" (UniqueName: \"kubernetes.io/projected/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-kube-api-access-l9nfp\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352116 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352161 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.352178 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tls-secret\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.353067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.357344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.358856 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0a446f-fcc8-40b8-81bc-fc80c8764582-config\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.364225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e0a446f-fcc8-40b8-81bc-fc80c8764582-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.374163 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvlv\" (UniqueName: \"kubernetes.io/projected/3e0a446f-fcc8-40b8-81bc-fc80c8764582-kube-api-access-jnvlv\") pod \"logging-loki-query-frontend-6d6859c548-wks44\" (UID: \"3e0a446f-fcc8-40b8-81bc-fc80c8764582\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.448210 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453446 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-rbac\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453462 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tenants\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tenants\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453578 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9nfp\" (UniqueName: \"kubernetes.io/projected/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-kube-api-access-l9nfp\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453646 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tls-secret\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453730 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-rbac\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tls-secret\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.453816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdmfc\" (UniqueName: \"kubernetes.io/projected/89876142-9620-43ca-bc5e-d0615a643826-kube-api-access-tdmfc\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.455691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-rbac\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.455720 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.457508 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tls-secret\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.457637 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-tenants\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.458117 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.458116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.458268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.458697 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-rbac\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.458892 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.459242 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"
\"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-lokistack-gateway\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.459542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89876142-9620-43ca-bc5e-d0615a643826-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.460842 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tls-secret\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.462339 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.463167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/89876142-9620-43ca-bc5e-d0615a643826-tenants\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.478622 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdmfc\" (UniqueName: \"kubernetes.io/projected/89876142-9620-43ca-bc5e-d0615a643826-kube-api-access-tdmfc\") pod \"logging-loki-gateway-85f68b45f-p8dz5\" (UID: \"89876142-9620-43ca-bc5e-d0615a643826\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.480173 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9nfp\" (UniqueName: \"kubernetes.io/projected/e4cfe4c6-e37d-4507-9bed-c2f13c0978ff-kube-api-access-l9nfp\") pod \"logging-loki-gateway-85f68b45f-f5k5x\" (UID: \"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff\") " pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.543947 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.591053 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.591715 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.669754 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq"] Feb 16 21:49:30 crc kubenswrapper[4792]: W0216 21:49:30.702716 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c9676d6_4914_442f_b206_68319ef59156.slice/crio-3ace50672aa489556e6ae488eeebd56cdce7039195437f443bf1cea95da53ad9 WatchSource:0}: Error finding container 3ace50672aa489556e6ae488eeebd56cdce7039195437f443bf1cea95da53ad9: Status 404 returned error can't find the container with id 3ace50672aa489556e6ae488eeebd56cdce7039195437f443bf1cea95da53ad9 Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.967038 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wks44"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.973890 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.974678 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.976468 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.976869 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 16 21:49:30 crc kubenswrapper[4792]: I0216 21:49:30.987236 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.048099 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.050088 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.052540 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.052796 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.064851 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.110901 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-config\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.110943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmqk\" (UniqueName: \"kubernetes.io/projected/4857850b-9fec-45a6-8c45-9d13153372cf-kube-api-access-qjmqk\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.110969 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.111116 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.111173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.111709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.111811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.111860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.152254 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.159356 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.163732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.163922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.167312 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.199327 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-p8dz5"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" event={"ID":"89876142-9620-43ca-bc5e-d0615a643826","Type":"ContainerStarted","Data":"23d6cc40be22368c9ac8c4e1a34e4c756e76020d4d07dd1404259c4c950ed79a"} Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212614 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212659 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-config\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212688 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmqk\" (UniqueName: \"kubernetes.io/projected/4857850b-9fec-45a6-8c45-9d13153372cf-kube-api-access-qjmqk\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 
21:49:31.212748 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212808 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-config\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212828 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212845 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212863 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212881 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212898 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-93d47eec-9baf-4f30-9173-8d96de939fff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93d47eec-9baf-4f30-9173-8d96de939fff\") 
pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9m4\" (UniqueName: \"kubernetes.io/projected/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-kube-api-access-gw9m4\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.212955 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.215666 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.215728 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f68b45f-f5k5x"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.216576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" event={"ID":"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff","Type":"ContainerStarted","Data":"5a0e0f71d37c414253f7d293dc841f10d005d0e51eef0eedbe34444adba66abb"} Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.216812 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.216840 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc8419de5ed2e7bbc14677da8f9e619d1a6b246c52f764d7e72cadbaea801d44/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.217060 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.217134 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d41113b8787f9ab51f81453f54d52d6ce94c48d28bd3ad3e1dbf30ecf836bb4e/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.219184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" event={"ID":"3e0a446f-fcc8-40b8-81bc-fc80c8764582","Type":"ContainerStarted","Data":"6ed1ce49312d89d7ba8baab803837e808dbccb66518a4a65f6f4807f92f72c98"}
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.220072 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.220166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.220246 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" event={"ID":"6c9676d6-4914-442f-b206-68319ef59156","Type":"ContainerStarted","Data":"3ace50672aa489556e6ae488eeebd56cdce7039195437f443bf1cea95da53ad9"}
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.220332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4857850b-9fec-45a6-8c45-9d13153372cf-config\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.220690 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4857850b-9fec-45a6-8c45-9d13153372cf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.228583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmqk\" (UniqueName: \"kubernetes.io/projected/4857850b-9fec-45a6-8c45-9d13153372cf-kube-api-access-qjmqk\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.248320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d31b1a99-37fc-4110-b31d-366feda7e72c\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.249110 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47d5d32f-17ee-4b88-8483-6c6668ee0039\") pod \"logging-loki-ingester-0\" (UID: \"4857850b-9fec-45a6-8c45-9d13153372cf\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.307316 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314680 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlbp\" (UniqueName: \"kubernetes.io/projected/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-kube-api-access-srlbp\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314781 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314808 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-config\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314856 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314877 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314902 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-93d47eec-9baf-4f30-9173-8d96de939fff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93d47eec-9baf-4f30-9173-8d96de939fff\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.314961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.315006 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.315033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.315061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-153b8233-3807-4afb-b663-158f175937ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-153b8233-3807-4afb-b663-158f175937ff\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.315132 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9m4\" (UniqueName: \"kubernetes.io/projected/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-kube-api-access-gw9m4\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.316479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.318167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0"
\"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.318565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.318682 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-config\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.318873 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.318922 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-93d47eec-9baf-4f30-9173-8d96de939fff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93d47eec-9baf-4f30-9173-8d96de939fff\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bbac4ccf73d79731224e49dbbf6d1afb177064413144f9987b6358ff7e315de/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.321186 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.328176 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-696l8"] Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.338487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9m4\" (UniqueName: \"kubernetes.io/projected/732fff3b-fe1d-4e49-96da-e18db7ce5e9b-kube-api-access-gw9m4\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.365857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-93d47eec-9baf-4f30-9173-8d96de939fff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93d47eec-9baf-4f30-9173-8d96de939fff\") pod \"logging-loki-compactor-0\" (UID: \"732fff3b-fe1d-4e49-96da-e18db7ce5e9b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.409853 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlbp\" (UniqueName: \"kubernetes.io/projected/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-kube-api-access-srlbp\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416886 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.416995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.417094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-153b8233-3807-4afb-b663-158f175937ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-153b8233-3807-4afb-b663-158f175937ff\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.421502 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.422382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.423168 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.424792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.425038 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.425071 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-153b8233-3807-4afb-b663-158f175937ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-153b8233-3807-4afb-b663-158f175937ff\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1072704639cac7b05b383ad7cf7ef176160fc758f78809cf1d6a9ba3a61b188c/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.428984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.435784 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlbp\" (UniqueName: \"kubernetes.io/projected/e322f3d3-92f8-4b24-88ea-a2189fc9c7fb-kube-api-access-srlbp\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.477113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-153b8233-3807-4afb-b663-158f175937ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-153b8233-3807-4afb-b663-158f175937ff\") pod \"logging-loki-index-gateway-0\" (UID: \"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.532997 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.533049 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.778139 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.797180 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: W0216 21:49:31.801374 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4857850b_9fec_45a6_8c45_9d13153372cf.slice/crio-d2861c6ec9b3d5d347425dbc30a6832f451a406592ed98e6d81effdd1d79f7b8 WatchSource:0}: Error finding container d2861c6ec9b3d5d347425dbc30a6832f451a406592ed98e6d81effdd1d79f7b8: Status 404 returned error can't find the container with id d2861c6ec9b3d5d347425dbc30a6832f451a406592ed98e6d81effdd1d79f7b8 Feb 16 21:49:31 crc kubenswrapper[4792]: I0216 21:49:31.861416 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 21:49:31 crc kubenswrapper[4792]: W0216 21:49:31.876858 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod732fff3b_fe1d_4e49_96da_e18db7ce5e9b.slice/crio-8ed6dc0dfd4ab2fea0709a5c24e7e436f57c93f63659d1b2f042369fb6a5d895 WatchSource:0}: Error finding container 8ed6dc0dfd4ab2fea0709a5c24e7e436f57c93f63659d1b2f042369fb6a5d895: Status 404 returned error can't find the container with id 8ed6dc0dfd4ab2fea0709a5c24e7e436f57c93f63659d1b2f042369fb6a5d895 Feb 16 21:49:32 crc kubenswrapper[4792]: I0216 21:49:32.232235 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4857850b-9fec-45a6-8c45-9d13153372cf","Type":"ContainerStarted","Data":"d2861c6ec9b3d5d347425dbc30a6832f451a406592ed98e6d81effdd1d79f7b8"} Feb 16 21:49:32 crc kubenswrapper[4792]: I0216 21:49:32.235207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" event={"ID":"1b78e491-c2b1-4381-b1df-4e53af021942","Type":"ContainerStarted","Data":"e323a9bb9603a882a0e58b8d3ebf434e59ca998cfc824e53658c34d7544437ee"} Feb 16 21:49:32 crc kubenswrapper[4792]: I0216 21:49:32.236687 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"732fff3b-fe1d-4e49-96da-e18db7ce5e9b","Type":"ContainerStarted","Data":"8ed6dc0dfd4ab2fea0709a5c24e7e436f57c93f63659d1b2f042369fb6a5d895"} Feb 16 21:49:32 crc kubenswrapper[4792]: I0216 21:49:32.258819 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 21:49:32 crc kubenswrapper[4792]: W0216 21:49:32.262395 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode322f3d3_92f8_4b24_88ea_a2189fc9c7fb.slice/crio-2e397cfe8da4db2203d5d29ec928db45a7c9c523dd7e61649201fbbceb54594e WatchSource:0}: Error finding container 2e397cfe8da4db2203d5d29ec928db45a7c9c523dd7e61649201fbbceb54594e: Status 404 returned error can't find the container with id 
2e397cfe8da4db2203d5d29ec928db45a7c9c523dd7e61649201fbbceb54594e Feb 16 21:49:33 crc kubenswrapper[4792]: I0216 21:49:33.244435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb","Type":"ContainerStarted","Data":"2e397cfe8da4db2203d5d29ec928db45a7c9c523dd7e61649201fbbceb54594e"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.258499 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" event={"ID":"6c9676d6-4914-442f-b206-68319ef59156","Type":"ContainerStarted","Data":"6c877aff8fdb40659939231626ba8085a6a7c0baf20f5716661bb37ef8c9fa62"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.259940 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.263815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4857850b-9fec-45a6-8c45-9d13153372cf","Type":"ContainerStarted","Data":"58a29ca25f41f2c9142a0e38b1988c42abc76f731b726a145dfab290d2193c9f"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.263960 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.266702 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"e322f3d3-92f8-4b24-88ea-a2189fc9c7fb","Type":"ContainerStarted","Data":"f90a3347a481742f67e8bf778a6a3fc4e81eb9a15fb4d383f1565cc7b1d35cf1"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.266907 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.268122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" event={"ID":"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff","Type":"ContainerStarted","Data":"083cb1d348f071fd2736fba9ca0a2a446cd26e9985b904a4552a18d698c5b331"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.269131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" event={"ID":"89876142-9620-43ca-bc5e-d0615a643826","Type":"ContainerStarted","Data":"3de476bcf3ff260e8d035fe01c89a9c7b8426d02ddf9fb615e80652008bd0ba1"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.270357 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" event={"ID":"3e0a446f-fcc8-40b8-81bc-fc80c8764582","Type":"ContainerStarted","Data":"8b07be00564e3f777f480419f24865e23a6957d260c830fd6abae810c7948a67"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.270527 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.271993 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" event={"ID":"1b78e491-c2b1-4381-b1df-4e53af021942","Type":"ContainerStarted","Data":"fccc5b7e57757d961d25a0dd4a29799bd8a92ced5b1b6b375243305244f52f72"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.272092 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.273100 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"732fff3b-fe1d-4e49-96da-e18db7ce5e9b","Type":"ContainerStarted","Data":"39fdf57f7f8fe4abd08945e6410453a7e328c9acfe41a628a4be84523b41597f"} Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.273250 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.281461 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" podStartSLOduration=2.476698384 podStartE2EDuration="6.281440523s" podCreationTimestamp="2026-02-16 21:49:29 +0000 UTC" firstStartedPulling="2026-02-16 21:49:30.714776875 +0000 UTC m=+703.368055766" lastFinishedPulling="2026-02-16 21:49:34.519519014 +0000 UTC m=+707.172797905" observedRunningTime="2026-02-16 21:49:35.27513018 +0000 UTC m=+707.928409071" watchObservedRunningTime="2026-02-16 21:49:35.281440523 +0000 UTC m=+707.934719414" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.301538 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.518653884 podStartE2EDuration="6.301518616s" podCreationTimestamp="2026-02-16 21:49:29 +0000 UTC" firstStartedPulling="2026-02-16 21:49:31.804921207 +0000 UTC m=+704.458200098" lastFinishedPulling="2026-02-16 21:49:34.587785939 +0000 UTC m=+707.241064830" observedRunningTime="2026-02-16 21:49:35.292415339 +0000 UTC m=+707.945694240" watchObservedRunningTime="2026-02-16 21:49:35.301518616 +0000 UTC m=+707.954797517" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.327316 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" podStartSLOduration=1.7315192700000002 podStartE2EDuration="5.327294886s" podCreationTimestamp="2026-02-16 21:49:30 +0000 UTC" firstStartedPulling="2026-02-16 21:49:30.990664018 +0000 UTC m=+703.643942909" lastFinishedPulling="2026-02-16 21:49:34.586439634 +0000 UTC m=+707.239718525" observedRunningTime="2026-02-16 21:49:35.318956159 +0000 UTC m=+707.972235060" watchObservedRunningTime="2026-02-16 21:49:35.327294886 +0000 UTC m=+707.980573777" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.348521 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" podStartSLOduration=3.085867882 podStartE2EDuration="6.348501547s" podCreationTimestamp="2026-02-16 21:49:29 +0000 UTC" firstStartedPulling="2026-02-16 21:49:31.358519042 +0000 UTC m=+704.011797953" lastFinishedPulling="2026-02-16 21:49:34.621152717 +0000 UTC m=+707.274431618" observedRunningTime="2026-02-16 21:49:35.337425479 +0000 UTC m=+707.990704370" watchObservedRunningTime="2026-02-16 21:49:35.348501547 +0000 UTC m=+708.001780438" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.357496 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.659084016 podStartE2EDuration="5.357478991s" podCreationTimestamp="2026-02-16 21:49:30 +0000 UTC" firstStartedPulling="2026-02-16 21:49:31.879810055 +0000 UTC m=+704.533088946" 
lastFinishedPulling="2026-02-16 21:49:34.57820503 +0000 UTC m=+707.231483921" observedRunningTime="2026-02-16 21:49:35.355270423 +0000 UTC m=+708.008549314" watchObservedRunningTime="2026-02-16 21:49:35.357478991 +0000 UTC m=+708.010757882" Feb 16 21:49:35 crc kubenswrapper[4792]: I0216 21:49:35.379545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.05897518 podStartE2EDuration="5.379528563s" podCreationTimestamp="2026-02-16 21:49:30 +0000 UTC" firstStartedPulling="2026-02-16 21:49:32.265320967 +0000 UTC m=+704.918599858" lastFinishedPulling="2026-02-16 21:49:34.58587435 +0000 UTC m=+707.239153241" observedRunningTime="2026-02-16 21:49:35.374315938 +0000 UTC m=+708.027594839" watchObservedRunningTime="2026-02-16 21:49:35.379528563 +0000 UTC m=+708.032807454" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.337277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" event={"ID":"89876142-9620-43ca-bc5e-d0615a643826","Type":"ContainerStarted","Data":"71d0cc51170409f52ddcfb47a729c2a89aa481990d73b4dc4b1354f45dbf4c9f"} Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.337888 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.339394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" event={"ID":"e4cfe4c6-e37d-4507-9bed-c2f13c0978ff","Type":"ContainerStarted","Data":"8b92df4c6652a195a6fecd01f61519bcce75f9a35caacd36426b2d6019d124c0"} Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.339873 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.339917 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.348930 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.350274 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.354878 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.361493 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" podStartSLOduration=2.412041293 podStartE2EDuration="11.361476578s" podCreationTimestamp="2026-02-16 21:49:30 +0000 UTC" firstStartedPulling="2026-02-16 21:49:31.204704813 +0000 UTC m=+703.857983704" lastFinishedPulling="2026-02-16 21:49:40.154140098 +0000 UTC m=+712.807418989" observedRunningTime="2026-02-16 21:49:41.356281212 +0000 UTC m=+714.009560103" watchObservedRunningTime="2026-02-16 21:49:41.361476578 +0000 UTC m=+714.014755469" Feb 16 21:49:41 crc kubenswrapper[4792]: I0216 21:49:41.402087 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85f68b45f-f5k5x" 
podStartSLOduration=2.460889152 podStartE2EDuration="11.402066643s" podCreationTimestamp="2026-02-16 21:49:30 +0000 UTC" firstStartedPulling="2026-02-16 21:49:31.205255567 +0000 UTC m=+703.858534458" lastFinishedPulling="2026-02-16 21:49:40.146433058 +0000 UTC m=+712.799711949" observedRunningTime="2026-02-16 21:49:41.396466067 +0000 UTC m=+714.049744958" watchObservedRunningTime="2026-02-16 21:49:41.402066643 +0000 UTC m=+714.055345534" Feb 16 21:49:42 crc kubenswrapper[4792]: I0216 21:49:42.345142 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:42 crc kubenswrapper[4792]: I0216 21:49:42.423948 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f68b45f-p8dz5" Feb 16 21:49:50 crc kubenswrapper[4792]: I0216 21:49:50.155800 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x5pvq" Feb 16 21:49:50 crc kubenswrapper[4792]: I0216 21:49:50.453217 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wks44" Feb 16 21:49:50 crc kubenswrapper[4792]: I0216 21:49:50.597450 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-696l8" Feb 16 21:49:51 crc kubenswrapper[4792]: I0216 21:49:51.315489 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 21:49:51 crc kubenswrapper[4792]: I0216 21:49:51.315551 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4857850b-9fec-45a6-8c45-9d13153372cf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:49:51 crc kubenswrapper[4792]: I0216 21:49:51.425356 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 16 21:49:51 crc kubenswrapper[4792]: I0216 21:49:51.784330 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 21:50:01 crc kubenswrapper[4792]: I0216 21:50:01.316041 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 21:50:01 crc kubenswrapper[4792]: I0216 21:50:01.316467 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4857850b-9fec-45a6-8c45-9d13153372cf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:50:01 crc kubenswrapper[4792]: I0216 21:50:01.532292 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:50:01 crc kubenswrapper[4792]: I0216 21:50:01.532384 4792 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:50:11 crc kubenswrapper[4792]: I0216 21:50:11.312080 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 21:50:11 crc kubenswrapper[4792]: I0216 21:50:11.312800 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4857850b-9fec-45a6-8c45-9d13153372cf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:50:21 crc kubenswrapper[4792]: I0216 21:50:21.315472 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 21:50:21 crc kubenswrapper[4792]: I0216 21:50:21.316174 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4857850b-9fec-45a6-8c45-9d13153372cf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 21:50:21 crc kubenswrapper[4792]: I0216 21:50:21.680120 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.313873 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.532221 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.532543 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.532583 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.533196 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.533258 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" 
podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2" gracePeriod=600 Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.749002 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2" exitCode=0 Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.749066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2"} Feb 16 21:50:31 crc kubenswrapper[4792]: I0216 21:50:31.749135 4792 scope.go:117] "RemoveContainer" containerID="9272c7263fc79bf4b80d98a592fd7f6d5b1774c4c4cac8d1e6c3bd5c3ce2b59b" Feb 16 21:50:32 crc kubenswrapper[4792]: I0216 21:50:32.760506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7"} Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.046216 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-6lllz"] Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.053036 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.056723 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.057000 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.058152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.058419 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p8h94" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.069912 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.070369 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-6lllz"] Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.091313 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.159405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.159496 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp\") pod \"collector-6lllz\" (UID: 
\"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.159544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.159650 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.159879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.160235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.160569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppxk\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.160961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.160994 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.161038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.161065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 
16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.200021 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-6lllz"] Feb 16 21:50:49 crc kubenswrapper[4792]: E0216 21:50:49.200914 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-gppxk metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-6lllz" podUID="7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.262842 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppxk\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263606 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 
16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.263944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.264024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.264534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.264850 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.266312 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.266380 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.269433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.270811 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token\") pod \"collector-6lllz\" (UID: 
\"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.282038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.285548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.285862 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.291945 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppxk\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk\") pod \"collector-6lllz\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.908811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6lllz" Feb 16 21:50:49 crc kubenswrapper[4792]: I0216 21:50:49.920678 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-6lllz" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075032 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075095 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075292 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075330 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075371 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075406 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075536 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gppxk\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075660 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: 
\"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075691 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint\") pod \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\" (UID: \"7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42\") " Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.075955 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir" (OuterVolumeSpecName: "datadir") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.076166 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config" (OuterVolumeSpecName: "config") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.076764 4792 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-datadir\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.076791 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.076860 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.077863 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.078178 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.079870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token" (OuterVolumeSpecName: "collector-token") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.079934 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk" (OuterVolumeSpecName: "kube-api-access-gppxk") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "kube-api-access-gppxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.082146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.082655 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token" (OuterVolumeSpecName: "sa-token") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.082887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp" (OuterVolumeSpecName: "tmp") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.083186 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics" (OuterVolumeSpecName: "metrics") pod "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" (UID: "7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179833 4792 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179888 4792 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179907 4792 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-token\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179925 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gppxk\" (UniqueName: \"kubernetes.io/projected/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-kube-api-access-gppxk\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179942 4792 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179959 4792 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179977 4792 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.179995 4792 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.919880 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6lllz" Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.989404 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-6lllz"] Feb 16 21:50:50 crc kubenswrapper[4792]: I0216 21:50:50.999431 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-6lllz"] Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.017920 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-9nkvn"] Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.019112 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.020986 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.021164 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.021591 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.021789 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.022922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p8h94" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.045108 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.046071 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9nkvn"] Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e3e938a2-8839-497e-ba02-7d1f5e2a1998-datadir\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4lf\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-kube-api-access-7j4lf\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-entrypoint\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096341 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config-openshift-service-cacrt\") pod \"collector-9nkvn\" (UID: 
\"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e3e938a2-8839-497e-ba02-7d1f5e2a1998-tmp\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096464 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-sa-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096495 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-trusted-ca\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096523 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-syslog-receiver\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.096577 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-metrics\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198342 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config-openshift-service-cacrt\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198678 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e3e938a2-8839-497e-ba02-7d1f5e2a1998-tmp\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198730 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-sa-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198787 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-trusted-ca\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198819 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-syslog-receiver\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.198883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-metrics\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.199104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e3e938a2-8839-497e-ba02-7d1f5e2a1998-datadir\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.199200 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4lf\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-kube-api-access-7j4lf\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.199290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-entrypoint\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.199375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.199588 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e3e938a2-8839-497e-ba02-7d1f5e2a1998-datadir\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.200113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config-openshift-service-cacrt\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.201344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-trusted-ca\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " 
pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.201399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-config\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.201455 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e3e938a2-8839-497e-ba02-7d1f5e2a1998-entrypoint\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.205739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e3e938a2-8839-497e-ba02-7d1f5e2a1998-tmp\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.206184 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-syslog-receiver\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.206830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-metrics\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.208476 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e3e938a2-8839-497e-ba02-7d1f5e2a1998-collector-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.221059 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-sa-token\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.228396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4lf\" (UniqueName: \"kubernetes.io/projected/e3e938a2-8839-497e-ba02-7d1f5e2a1998-kube-api-access-7j4lf\") pod \"collector-9nkvn\" (UID: \"e3e938a2-8839-497e-ba02-7d1f5e2a1998\") " pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.348532 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-9nkvn" Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.830663 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9nkvn"] Feb 16 21:50:51 crc kubenswrapper[4792]: I0216 21:50:51.928507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-9nkvn" event={"ID":"e3e938a2-8839-497e-ba02-7d1f5e2a1998","Type":"ContainerStarted","Data":"2eb41684b5a0642ef4b3159c52f2156bd4b46e6935ed0a18d41d6e68c1ef0243"} Feb 16 21:50:52 crc kubenswrapper[4792]: I0216 21:50:52.036386 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42" path="/var/lib/kubelet/pods/7b5a0b9b-8450-49f2-9011-d3aa2ef7bc42/volumes" Feb 16 21:50:58 crc kubenswrapper[4792]: I0216 21:50:58.990018 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-9nkvn" event={"ID":"e3e938a2-8839-497e-ba02-7d1f5e2a1998","Type":"ContainerStarted","Data":"e642fa4b5fed88bea0dc93738f51f2f5f8f8e70fb38f8a75656e008d2da5a58d"} Feb 16 21:50:59 crc kubenswrapper[4792]: I0216 21:50:59.014985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-9nkvn" podStartSLOduration=2.961001597 podStartE2EDuration="9.014962662s" podCreationTimestamp="2026-02-16 21:50:50 +0000 UTC" firstStartedPulling="2026-02-16 21:50:51.840340273 +0000 UTC m=+784.493619194" lastFinishedPulling="2026-02-16 21:50:57.894301358 +0000 UTC m=+790.547580259" observedRunningTime="2026-02-16 21:50:59.012547782 +0000 UTC m=+791.665826693" watchObservedRunningTime="2026-02-16 21:50:59.014962662 +0000 UTC m=+791.668241553" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.489135 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl"] Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.491267 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.492981 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.501719 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl"] Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.563934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds45\" (UniqueName: \"kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.564040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.564307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.666800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mds45\" (UniqueName: \"kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.666914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.667052 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.667878 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.667921 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.691920 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mds45\" (UniqueName: \"kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:28 crc kubenswrapper[4792]: I0216 21:51:28.809399 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:29 crc kubenswrapper[4792]: I0216 21:51:29.249367 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl"] Feb 16 21:51:29 crc kubenswrapper[4792]: I0216 21:51:29.277140 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" event={"ID":"7378542c-ef2c-46ad-af40-8f08005d9537","Type":"ContainerStarted","Data":"741aceacfb5043903a07a804a5a92f4f28ba40184d96b701d2ef6f4d889825fe"} Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.195457 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.196724 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.214408 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.285056 4792 generic.go:334] "Generic (PLEG): container finished" podID="7378542c-ef2c-46ad-af40-8f08005d9537" containerID="f8c31559c1a3326d76aa6de540ac315202f4da63053d75636462e4e10195054e" exitCode=0 Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.285118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" event={"ID":"7378542c-ef2c-46ad-af40-8f08005d9537","Type":"ContainerDied","Data":"f8c31559c1a3326d76aa6de540ac315202f4da63053d75636462e4e10195054e"} Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.290549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.290607 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.290643 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5nf\" (UniqueName: \"kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.392085 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.392133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.392173 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5nf\" (UniqueName: \"kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.392741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities\") pod \"redhat-operators-t89ww\" (UID: 
\"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.392982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.414079 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5nf\" (UniqueName: \"kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf\") pod \"redhat-operators-t89ww\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.563952 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:30 crc kubenswrapper[4792]: I0216 21:51:30.989658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:31 crc kubenswrapper[4792]: I0216 21:51:31.298796 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerID="e6a580920ed7119b50f2b03ab2494e66b40e9282622ee23b2c17bd1e1df2569b" exitCode=0 Feb 16 21:51:31 crc kubenswrapper[4792]: I0216 21:51:31.298910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerDied","Data":"e6a580920ed7119b50f2b03ab2494e66b40e9282622ee23b2c17bd1e1df2569b"} Feb 16 21:51:31 crc kubenswrapper[4792]: I0216 21:51:31.299904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerStarted","Data":"9c19acaaade79a65290a53eafd74256bcc8d28fbb707371fd549b6064df01e32"} Feb 16 21:51:32 crc kubenswrapper[4792]: I0216 21:51:32.310485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerStarted","Data":"f401645f65ad20f7743361479f9dae53b36834780df573383f45cdc5183474a2"} Feb 16 21:51:32 crc kubenswrapper[4792]: I0216 21:51:32.315120 4792 generic.go:334] "Generic (PLEG): container finished" podID="7378542c-ef2c-46ad-af40-8f08005d9537" containerID="4fee1a272f958a2946eea3aa6672c10a08c12cdfc573966c6d4c176b39dafbd1" exitCode=0 Feb 16 21:51:32 crc kubenswrapper[4792]: I0216 21:51:32.315174 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" event={"ID":"7378542c-ef2c-46ad-af40-8f08005d9537","Type":"ContainerDied","Data":"4fee1a272f958a2946eea3aa6672c10a08c12cdfc573966c6d4c176b39dafbd1"} Feb 16 21:51:33 crc kubenswrapper[4792]: I0216 21:51:33.324270 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerID="f401645f65ad20f7743361479f9dae53b36834780df573383f45cdc5183474a2" exitCode=0 Feb 16 21:51:33 crc kubenswrapper[4792]: I0216 21:51:33.324338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" 
event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerDied","Data":"f401645f65ad20f7743361479f9dae53b36834780df573383f45cdc5183474a2"} Feb 16 21:51:33 crc kubenswrapper[4792]: I0216 21:51:33.328101 4792 generic.go:334] "Generic (PLEG): container finished" podID="7378542c-ef2c-46ad-af40-8f08005d9537" containerID="b83c46cf2c6bf8b86754034477f944c4bbc1965056f7aefd3b27c748d360cb8b" exitCode=0 Feb 16 21:51:33 crc kubenswrapper[4792]: I0216 21:51:33.328143 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" event={"ID":"7378542c-ef2c-46ad-af40-8f08005d9537","Type":"ContainerDied","Data":"b83c46cf2c6bf8b86754034477f944c4bbc1965056f7aefd3b27c748d360cb8b"} Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.337336 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerStarted","Data":"07342f312f2865377f57d823f104651c54354b1926128f205bb5c3bf519bb473"} Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.356203 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t89ww" podStartSLOduration=1.9505016720000001 podStartE2EDuration="4.356182518s" podCreationTimestamp="2026-02-16 21:51:30 +0000 UTC" firstStartedPulling="2026-02-16 21:51:31.300183286 +0000 UTC m=+823.953462177" lastFinishedPulling="2026-02-16 21:51:33.705864132 +0000 UTC m=+826.359143023" observedRunningTime="2026-02-16 21:51:34.355784918 +0000 UTC m=+827.009063809" watchObservedRunningTime="2026-02-16 21:51:34.356182518 +0000 UTC m=+827.009461409" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.626188 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.658035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mds45\" (UniqueName: \"kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45\") pod \"7378542c-ef2c-46ad-af40-8f08005d9537\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.658152 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle\") pod \"7378542c-ef2c-46ad-af40-8f08005d9537\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.658300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util\") pod \"7378542c-ef2c-46ad-af40-8f08005d9537\" (UID: \"7378542c-ef2c-46ad-af40-8f08005d9537\") " Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.658774 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle" (OuterVolumeSpecName: "bundle") pod "7378542c-ef2c-46ad-af40-8f08005d9537" (UID: "7378542c-ef2c-46ad-af40-8f08005d9537"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.666843 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45" (OuterVolumeSpecName: "kube-api-access-mds45") pod "7378542c-ef2c-46ad-af40-8f08005d9537" (UID: "7378542c-ef2c-46ad-af40-8f08005d9537"). InnerVolumeSpecName "kube-api-access-mds45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.677057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util" (OuterVolumeSpecName: "util") pod "7378542c-ef2c-46ad-af40-8f08005d9537" (UID: "7378542c-ef2c-46ad-af40-8f08005d9537"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.759954 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.759994 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7378542c-ef2c-46ad-af40-8f08005d9537-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:34 crc kubenswrapper[4792]: I0216 21:51:34.760003 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mds45\" (UniqueName: \"kubernetes.io/projected/7378542c-ef2c-46ad-af40-8f08005d9537-kube-api-access-mds45\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:35 crc kubenswrapper[4792]: I0216 21:51:35.346162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" event={"ID":"7378542c-ef2c-46ad-af40-8f08005d9537","Type":"ContainerDied","Data":"741aceacfb5043903a07a804a5a92f4f28ba40184d96b701d2ef6f4d889825fe"} Feb 16 21:51:35 crc kubenswrapper[4792]: I0216 21:51:35.346226 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741aceacfb5043903a07a804a5a92f4f28ba40184d96b701d2ef6f4d889825fe" Feb 16 21:51:35 crc kubenswrapper[4792]: I0216 21:51:35.346184 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.500899 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-65zbh"] Feb 16 21:51:39 crc kubenswrapper[4792]: E0216 21:51:39.501787 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="extract" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.501807 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="extract" Feb 16 21:51:39 crc kubenswrapper[4792]: E0216 21:51:39.501824 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="pull" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.501834 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="pull" Feb 16 21:51:39 crc kubenswrapper[4792]: E0216 21:51:39.501852 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="util" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.501863 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="util" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.502045 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7378542c-ef2c-46ad-af40-8f08005d9537" containerName="extract" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.502725 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.506436 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.516224 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.516662 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6m8bt" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.518302 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-65zbh"] Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.632192 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gssr\" (UniqueName: \"kubernetes.io/projected/8eb6adaa-1be6-408f-b428-ccdb580dfb6a-kube-api-access-5gssr\") pod \"nmstate-operator-694c9596b7-65zbh\" (UID: \"8eb6adaa-1be6-408f-b428-ccdb580dfb6a\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.733418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gssr\" (UniqueName: \"kubernetes.io/projected/8eb6adaa-1be6-408f-b428-ccdb580dfb6a-kube-api-access-5gssr\") pod \"nmstate-operator-694c9596b7-65zbh\" (UID: \"8eb6adaa-1be6-408f-b428-ccdb580dfb6a\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.753038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gssr\" 
(UniqueName: \"kubernetes.io/projected/8eb6adaa-1be6-408f-b428-ccdb580dfb6a-kube-api-access-5gssr\") pod \"nmstate-operator-694c9596b7-65zbh\" (UID: \"8eb6adaa-1be6-408f-b428-ccdb580dfb6a\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" Feb 16 21:51:39 crc kubenswrapper[4792]: I0216 21:51:39.829628 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" Feb 16 21:51:40 crc kubenswrapper[4792]: I0216 21:51:40.277321 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-65zbh"] Feb 16 21:51:40 crc kubenswrapper[4792]: W0216 21:51:40.289751 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb6adaa_1be6_408f_b428_ccdb580dfb6a.slice/crio-9cdd67f0dd4bf00288b36d26e4b26cc93fe8770d30fe5704877af6085881e704 WatchSource:0}: Error finding container 9cdd67f0dd4bf00288b36d26e4b26cc93fe8770d30fe5704877af6085881e704: Status 404 returned error can't find the container with id 9cdd67f0dd4bf00288b36d26e4b26cc93fe8770d30fe5704877af6085881e704 Feb 16 21:51:40 crc kubenswrapper[4792]: I0216 21:51:40.381268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" event={"ID":"8eb6adaa-1be6-408f-b428-ccdb580dfb6a","Type":"ContainerStarted","Data":"9cdd67f0dd4bf00288b36d26e4b26cc93fe8770d30fe5704877af6085881e704"} Feb 16 21:51:40 crc kubenswrapper[4792]: I0216 21:51:40.565246 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:40 crc kubenswrapper[4792]: I0216 21:51:40.565305 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:40 crc kubenswrapper[4792]: I0216 21:51:40.613001 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:41 crc kubenswrapper[4792]: I0216 21:51:41.462520 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:42 crc kubenswrapper[4792]: I0216 21:51:42.397813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" event={"ID":"8eb6adaa-1be6-408f-b428-ccdb580dfb6a","Type":"ContainerStarted","Data":"93e5c937b6fa1b2f35166206f574cc0ce606d80e3f0567de36a02bbb9cfdce10"} Feb 16 21:51:42 crc kubenswrapper[4792]: I0216 21:51:42.412136 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-65zbh" podStartSLOduration=1.720809911 podStartE2EDuration="3.412117635s" podCreationTimestamp="2026-02-16 21:51:39 +0000 UTC" firstStartedPulling="2026-02-16 21:51:40.291115988 +0000 UTC m=+832.944394879" lastFinishedPulling="2026-02-16 21:51:41.982423712 +0000 UTC m=+834.635702603" observedRunningTime="2026-02-16 21:51:42.410661145 +0000 UTC m=+835.063940056" watchObservedRunningTime="2026-02-16 21:51:42.412117635 +0000 UTC m=+835.065396526" Feb 16 21:51:42 crc kubenswrapper[4792]: I0216 21:51:42.993198 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:43 crc kubenswrapper[4792]: I0216 21:51:43.403448 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-t89ww" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="registry-server" containerID="cri-o://07342f312f2865377f57d823f104651c54354b1926128f205bb5c3bf519bb473" gracePeriod=2 Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.413356 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerID="07342f312f2865377f57d823f104651c54354b1926128f205bb5c3bf519bb473" exitCode=0 Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.413453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerDied","Data":"07342f312f2865377f57d823f104651c54354b1926128f205bb5c3bf519bb473"} Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.413691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t89ww" event={"ID":"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5","Type":"ContainerDied","Data":"9c19acaaade79a65290a53eafd74256bcc8d28fbb707371fd549b6064df01e32"} Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.413706 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c19acaaade79a65290a53eafd74256bcc8d28fbb707371fd549b6064df01e32" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.427487 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.613567 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content\") pod \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.613695 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities\") pod \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.613933 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv5nf\" (UniqueName: \"kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf\") pod \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\" (UID: \"8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5\") " Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.615365 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities" (OuterVolumeSpecName: "utilities") pod "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" (UID: "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.623911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf" (OuterVolumeSpecName: "kube-api-access-qv5nf") pod "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" (UID: "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5"). InnerVolumeSpecName "kube-api-access-qv5nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.716114 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv5nf\" (UniqueName: \"kubernetes.io/projected/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-kube-api-access-qv5nf\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.716168 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.758235 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" (UID: "8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:51:44 crc kubenswrapper[4792]: I0216 21:51:44.817698 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:51:45 crc kubenswrapper[4792]: I0216 21:51:45.419578 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t89ww" Feb 16 21:51:45 crc kubenswrapper[4792]: I0216 21:51:45.453672 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:45 crc kubenswrapper[4792]: I0216 21:51:45.458480 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t89ww"] Feb 16 21:51:46 crc kubenswrapper[4792]: I0216 21:51:46.036121 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" path="/var/lib/kubelet/pods/8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5/volumes" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.928461 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc"] Feb 16 21:51:49 crc kubenswrapper[4792]: E0216 21:51:49.929432 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="extract-content" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.929448 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="extract-content" Feb 16 21:51:49 crc kubenswrapper[4792]: E0216 21:51:49.929474 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="registry-server" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.929481 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="registry-server" Feb 16 21:51:49 crc kubenswrapper[4792]: E0216 21:51:49.929495 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="extract-utilities" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.929503 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="extract-utilities" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.929674 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9e4bce-85d4-45ea-b9f1-0ac473bbb5f5" containerName="registry-server" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.930716 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.935681 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x5md4" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.948724 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg"] Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.949729 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.952070 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.954365 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg"] Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.960821 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-llwc8"] Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.961747 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:49 crc kubenswrapper[4792]: I0216 21:51:49.968185 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc"] Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004659 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld52m\" (UniqueName: \"kubernetes.io/projected/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-kube-api-access-ld52m\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004713 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-nmstate-lock\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-dbus-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004786 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-ovs-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004818 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" 
(UniqueName: \"kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004854 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhd75\" (UniqueName: \"kubernetes.io/projected/a0c35ce8-00e1-4421-9a89-a335e12d0d71-kube-api-access-lhd75\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.004884 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdv8x\" (UniqueName: \"kubernetes.io/projected/06b05942-626d-480f-bae3-80eafaef0fa5-kube-api-access-sdv8x\") pod \"nmstate-metrics-58c85c668d-gdhtc\" (UID: \"06b05942-626d-480f-bae3-80eafaef0fa5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.089331 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz"] Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.090350 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.095900 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.096307 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.096638 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p7svw" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.106411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld52m\" (UniqueName: \"kubernetes.io/projected/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-kube-api-access-ld52m\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.106824 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-nmstate-lock\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.106950 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz"] Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107018 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-nmstate-lock\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107654 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-dbus-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107875 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-dbus-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-ovs-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-ovs-socket\") pod \"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.107999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: E0216 21:51:50.108093 4792 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 21:51:50 crc kubenswrapper[4792]: E0216 21:51:50.108142 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair podName:a0c35ce8-00e1-4421-9a89-a335e12d0d71 nodeName:}" failed. No retries permitted until 2026-02-16 21:51:50.60812623 +0000 UTC m=+843.261405121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair") pod "nmstate-webhook-866bcb46dc-kk8rg" (UID: "a0c35ce8-00e1-4421-9a89-a335e12d0d71") : secret "openshift-nmstate-webhook" not found Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.108295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82fx\" (UniqueName: \"kubernetes.io/projected/f641b77f-8af3-4104-80c3-e07504d086d1-kube-api-access-l82fx\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.108341 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhd75\" (UniqueName: \"kubernetes.io/projected/a0c35ce8-00e1-4421-9a89-a335e12d0d71-kube-api-access-lhd75\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.108362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.108532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f641b77f-8af3-4104-80c3-e07504d086d1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.108572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdv8x\" (UniqueName: \"kubernetes.io/projected/06b05942-626d-480f-bae3-80eafaef0fa5-kube-api-access-sdv8x\") pod \"nmstate-metrics-58c85c668d-gdhtc\" (UID: \"06b05942-626d-480f-bae3-80eafaef0fa5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.133763 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdv8x\" (UniqueName: \"kubernetes.io/projected/06b05942-626d-480f-bae3-80eafaef0fa5-kube-api-access-sdv8x\") pod \"nmstate-metrics-58c85c668d-gdhtc\" (UID: \"06b05942-626d-480f-bae3-80eafaef0fa5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.134051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhd75\" (UniqueName: \"kubernetes.io/projected/a0c35ce8-00e1-4421-9a89-a335e12d0d71-kube-api-access-lhd75\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.137019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld52m\" (UniqueName: \"kubernetes.io/projected/dd045bc0-e27a-4fc1-808c-dd7aec8fce07-kube-api-access-ld52m\") pod 
\"nmstate-handler-llwc8\" (UID: \"dd045bc0-e27a-4fc1-808c-dd7aec8fce07\") " pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.210552 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82fx\" (UniqueName: \"kubernetes.io/projected/f641b77f-8af3-4104-80c3-e07504d086d1-kube-api-access-l82fx\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.210911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.210935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f641b77f-8af3-4104-80c3-e07504d086d1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: E0216 21:51:50.211055 4792 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 16 21:51:50 crc kubenswrapper[4792]: E0216 21:51:50.211116 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert podName:f641b77f-8af3-4104-80c3-e07504d086d1 nodeName:}" failed. No retries permitted until 2026-02-16 21:51:50.711099897 +0000 UTC m=+843.364378788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-sdtfz" (UID: "f641b77f-8af3-4104-80c3-e07504d086d1") : secret "plugin-serving-cert" not found Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.212032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f641b77f-8af3-4104-80c3-e07504d086d1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.230671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82fx\" (UniqueName: \"kubernetes.io/projected/f641b77f-8af3-4104-80c3-e07504d086d1-kube-api-access-l82fx\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.255463 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.281052 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"] Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.286836 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.293071 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.297570 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"] Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.318772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.318862 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82vz\" (UniqueName: \"kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.318944 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.318977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.319003 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.319030 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.319055 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: W0216 21:51:50.347161 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd045bc0_e27a_4fc1_808c_dd7aec8fce07.slice/crio-196bfdde40e7c2e1ad232396d0fd438d9cc59c84c1a7e4b04ac051d1eafab579 WatchSource:0}: Error finding container 196bfdde40e7c2e1ad232396d0fd438d9cc59c84c1a7e4b04ac051d1eafab579: Status 404 returned error can't find the container with id 196bfdde40e7c2e1ad232396d0fd438d9cc59c84c1a7e4b04ac051d1eafab579 Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421044 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421336 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421399 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421567 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82vz\" (UniqueName: \"kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.421691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.423458 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " 
pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.426093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.426711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.427109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.428504 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.428505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.443310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82vz\" (UniqueName: \"kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz\") pod \"console-6b4c75486b-tlvk9\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") " pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.473324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-llwc8" event={"ID":"dd045bc0-e27a-4fc1-808c-dd7aec8fce07","Type":"ContainerStarted","Data":"196bfdde40e7c2e1ad232396d0fd438d9cc59c84c1a7e4b04ac051d1eafab579"} Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.625142 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.627807 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a0c35ce8-00e1-4421-9a89-a335e12d0d71-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kk8rg\" (UID: \"a0c35ce8-00e1-4421-9a89-a335e12d0d71\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.647223 4792 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.726925 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.730145 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f641b77f-8af3-4104-80c3-e07504d086d1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sdtfz\" (UID: \"f641b77f-8af3-4104-80c3-e07504d086d1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.758109 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc"] Feb 16 21:51:50 crc kubenswrapper[4792]: W0216 21:51:50.774121 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06b05942_626d_480f_bae3_80eafaef0fa5.slice/crio-44da66d175797211021b905806c31774c9a0b3d9aef1d8d7df2e1d6aa4bfba70 WatchSource:0}: Error finding container 44da66d175797211021b905806c31774c9a0b3d9aef1d8d7df2e1d6aa4bfba70: Status 404 returned error can't find the container with id 44da66d175797211021b905806c31774c9a0b3d9aef1d8d7df2e1d6aa4bfba70 Feb 16 21:51:50 crc kubenswrapper[4792]: I0216 21:51:50.883834 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.006499 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.091881 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg"] Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.103648 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"] Feb 16 21:51:51 crc kubenswrapper[4792]: W0216 21:51:51.116730 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07c162cb_aadc_4abf_a1f6_3875f813417d.slice/crio-e84ecf9032fb57d2f417a8bee87747a0ec66d1a50be3b5ece2dc58f44d664602 WatchSource:0}: Error finding container e84ecf9032fb57d2f417a8bee87747a0ec66d1a50be3b5ece2dc58f44d664602: Status 404 returned error can't find the container with id e84ecf9032fb57d2f417a8bee87747a0ec66d1a50be3b5ece2dc58f44d664602 Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.485992 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" event={"ID":"06b05942-626d-480f-bae3-80eafaef0fa5","Type":"ContainerStarted","Data":"44da66d175797211021b905806c31774c9a0b3d9aef1d8d7df2e1d6aa4bfba70"} Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.488034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b4c75486b-tlvk9" event={"ID":"07c162cb-aadc-4abf-a1f6-3875f813417d","Type":"ContainerStarted","Data":"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0"} Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.488085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b4c75486b-tlvk9" event={"ID":"07c162cb-aadc-4abf-a1f6-3875f813417d","Type":"ContainerStarted","Data":"e84ecf9032fb57d2f417a8bee87747a0ec66d1a50be3b5ece2dc58f44d664602"} Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.490955 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" event={"ID":"a0c35ce8-00e1-4421-9a89-a335e12d0d71","Type":"ContainerStarted","Data":"432d930b1bcb266be23611868b165083f743d6a59d47d9f36026694946f006d8"} Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.495212 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz"] Feb 16 21:51:51 crc kubenswrapper[4792]: I0216 21:51:51.523364 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b4c75486b-tlvk9" podStartSLOduration=1.5233416370000001 podStartE2EDuration="1.523341637s" podCreationTimestamp="2026-02-16 21:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:51:51.519808142 +0000 UTC m=+844.173087043" watchObservedRunningTime="2026-02-16 21:51:51.523341637 +0000 UTC m=+844.176620538" Feb 16 21:51:52 crc kubenswrapper[4792]: I0216 21:51:52.500909 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" event={"ID":"f641b77f-8af3-4104-80c3-e07504d086d1","Type":"ContainerStarted","Data":"ec36e07598bc79398225239d64216764162495ef9c46a6b9f5caa67bb77a8a1e"} Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.510078 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" 
event={"ID":"a0c35ce8-00e1-4421-9a89-a335e12d0d71","Type":"ContainerStarted","Data":"007474ebaf457d37d47feaf3cb37d4b0c1eb3c7baabea3e4cf19bb2f227d7bb0"} Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.510445 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.516100 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" event={"ID":"06b05942-626d-480f-bae3-80eafaef0fa5","Type":"ContainerStarted","Data":"71067eb017b23d7bacb371a9593358b787fc4e2730988433429efafb204c693b"} Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.517882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-llwc8" event={"ID":"dd045bc0-e27a-4fc1-808c-dd7aec8fce07","Type":"ContainerStarted","Data":"ae4992613457ae32c8d02bd4f5c0bdce3c96d9206bb1a92e46b02bb1444c9fe3"} Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.518041 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:51:53 crc kubenswrapper[4792]: I0216 21:51:53.533106 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" podStartSLOduration=2.726704275 podStartE2EDuration="4.533083426s" podCreationTimestamp="2026-02-16 21:51:49 +0000 UTC" firstStartedPulling="2026-02-16 21:51:51.109351012 +0000 UTC m=+843.762629903" lastFinishedPulling="2026-02-16 21:51:52.915730143 +0000 UTC m=+845.569009054" observedRunningTime="2026-02-16 21:51:53.527710371 +0000 UTC m=+846.180989272" watchObservedRunningTime="2026-02-16 21:51:53.533083426 +0000 UTC m=+846.186362317" Feb 16 21:51:54 crc kubenswrapper[4792]: I0216 21:51:54.526007 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" event={"ID":"f641b77f-8af3-4104-80c3-e07504d086d1","Type":"ContainerStarted","Data":"d0eaf24de43fdc114fafc5f31a75a17308076d97b38554ee5da867b1f61c9910"} Feb 16 21:51:54 crc kubenswrapper[4792]: I0216 21:51:54.543493 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-llwc8" podStartSLOduration=2.979043123 podStartE2EDuration="5.543474966s" podCreationTimestamp="2026-02-16 21:51:49 +0000 UTC" firstStartedPulling="2026-02-16 21:51:50.357274082 +0000 UTC m=+843.010552973" lastFinishedPulling="2026-02-16 21:51:52.921705925 +0000 UTC m=+845.574984816" observedRunningTime="2026-02-16 21:51:53.549562975 +0000 UTC m=+846.202841876" watchObservedRunningTime="2026-02-16 21:51:54.543474966 +0000 UTC m=+847.196753857" Feb 16 21:51:54 crc kubenswrapper[4792]: I0216 21:51:54.548698 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sdtfz" podStartSLOduration=2.11799576 podStartE2EDuration="4.548681188s" podCreationTimestamp="2026-02-16 21:51:50 +0000 UTC" firstStartedPulling="2026-02-16 21:51:51.504631388 +0000 UTC m=+844.157910279" lastFinishedPulling="2026-02-16 21:51:53.935316816 +0000 UTC m=+846.588595707" observedRunningTime="2026-02-16 21:51:54.540563412 +0000 UTC m=+847.193842303" watchObservedRunningTime="2026-02-16 21:51:54.548681188 +0000 UTC m=+847.201960079" Feb 16 21:51:56 crc kubenswrapper[4792]: I0216 21:51:56.549327 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" event={"ID":"06b05942-626d-480f-bae3-80eafaef0fa5","Type":"ContainerStarted","Data":"d544b7cf2350d27b988ddb056c7f78535a28187bc4fff5ff8dd6bf4e7b5ec8a9"} Feb 16 21:51:56 crc kubenswrapper[4792]: I0216 21:51:56.583871 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gdhtc" podStartSLOduration=2.971247653 podStartE2EDuration="7.583843572s" podCreationTimestamp="2026-02-16 21:51:49 +0000 UTC" firstStartedPulling="2026-02-16 21:51:50.777209869 +0000 UTC m=+843.430488760" lastFinishedPulling="2026-02-16 21:51:55.389805788 +0000 UTC m=+848.043084679" observedRunningTime="2026-02-16 21:51:56.570836432 +0000 UTC m=+849.224115353" watchObservedRunningTime="2026-02-16 21:51:56.583843572 +0000 UTC m=+849.237122473" Feb 16 21:52:00 crc kubenswrapper[4792]: I0216 21:52:00.332253 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-llwc8" Feb 16 21:52:00 crc kubenswrapper[4792]: I0216 21:52:00.647772 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:52:00 crc kubenswrapper[4792]: I0216 21:52:00.648134 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:52:00 crc kubenswrapper[4792]: I0216 21:52:00.654919 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:52:01 crc kubenswrapper[4792]: I0216 21:52:01.604814 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:52:01 crc kubenswrapper[4792]: I0216 21:52:01.691567 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:52:10 crc kubenswrapper[4792]: I0216 21:52:10.894289 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kk8rg" Feb 16 21:52:26 crc kubenswrapper[4792]: I0216 21:52:26.749114 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5fb8cfd5f8-fjn25" podUID="070b7637-8d35-4fd2-82a5-91b32097015b" containerName="console" containerID="cri-o://99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06" gracePeriod=15 Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.163233 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5fb8cfd5f8-fjn25_070b7637-8d35-4fd2-82a5-91b32097015b/console/0.log" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.163710 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264311 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264416 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264480 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264719 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r254\" (UniqueName: \"kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264874 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.264920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert\") pod \"070b7637-8d35-4fd2-82a5-91b32097015b\" (UID: \"070b7637-8d35-4fd2-82a5-91b32097015b\") " Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.265395 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.265442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config" (OuterVolumeSpecName: "console-config") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.265649 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.266004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.266692 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca" (OuterVolumeSpecName: "service-ca") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.280296 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.282567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:52:27 crc kubenswrapper[4792]: I0216 21:52:27.287488 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254" (OuterVolumeSpecName: "kube-api-access-4r254") pod "070b7637-8d35-4fd2-82a5-91b32097015b" (UID: "070b7637-8d35-4fd2-82a5-91b32097015b"). InnerVolumeSpecName "kube-api-access-4r254". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106350 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r254\" (UniqueName: \"kubernetes.io/projected/070b7637-8d35-4fd2-82a5-91b32097015b-kube-api-access-4r254\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106389 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106402 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/070b7637-8d35-4fd2-82a5-91b32097015b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106414 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106426 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.106438 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/070b7637-8d35-4fd2-82a5-91b32097015b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155584 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5fb8cfd5f8-fjn25_070b7637-8d35-4fd2-82a5-91b32097015b/console/0.log" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155655 4792 generic.go:334] "Generic (PLEG): container finished" podID="070b7637-8d35-4fd2-82a5-91b32097015b" containerID="99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06" exitCode=2 Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155688 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fb8cfd5f8-fjn25" event={"ID":"070b7637-8d35-4fd2-82a5-91b32097015b","Type":"ContainerDied","Data":"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06"} Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fb8cfd5f8-fjn25" event={"ID":"070b7637-8d35-4fd2-82a5-91b32097015b","Type":"ContainerDied","Data":"071b6d3c8c9987844894df26a1b6b0cd87f20615bd20c1e791f6480979f1f562"} Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155736 4792 scope.go:117] "RemoveContainer" containerID="99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.155887 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5fb8cfd5f8-fjn25" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.177556 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.185937 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5fb8cfd5f8-fjn25"] Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.189149 4792 scope.go:117] "RemoveContainer" containerID="99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06" Feb 16 21:52:28 crc kubenswrapper[4792]: E0216 21:52:28.192916 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06\": container with ID starting with 99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06 not found: ID does not exist" containerID="99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06" Feb 16 21:52:28 crc kubenswrapper[4792]: I0216 21:52:28.192973 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06"} err="failed to get container status \"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06\": rpc error: code = NotFound desc = could not find container \"99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06\": container with ID starting with 99b8159057ec7796eeccd016846e672c4967551f9fee8cf9008b300d7848bc06 not found: ID does not exist" Feb 16 21:52:28 crc kubenswrapper[4792]: E0216 21:52:28.300978 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070b7637_8d35_4fd2_82a5_91b32097015b.slice/crio-071b6d3c8c9987844894df26a1b6b0cd87f20615bd20c1e791f6480979f1f562\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070b7637_8d35_4fd2_82a5_91b32097015b.slice\": RecentStats: unable to find data in memory cache]" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.034859 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070b7637-8d35-4fd2-82a5-91b32097015b" path="/var/lib/kubelet/pods/070b7637-8d35-4fd2-82a5-91b32097015b/volumes" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.214621 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm"] Feb 16 21:52:30 crc kubenswrapper[4792]: E0216 21:52:30.214947 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070b7637-8d35-4fd2-82a5-91b32097015b" containerName="console" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.214967 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="070b7637-8d35-4fd2-82a5-91b32097015b" containerName="console" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.215162 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="070b7637-8d35-4fd2-82a5-91b32097015b" containerName="console" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.216435 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.222017 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm"] Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.250197 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.352726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.353050 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csq46\" (UniqueName: \"kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.353216 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.454291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.454384 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.454478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csq46\" (UniqueName: \"kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.454859 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.454871 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.484324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csq46\" (UniqueName: \"kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.563818 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:30 crc kubenswrapper[4792]: I0216 21:52:30.978246 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm"] Feb 16 21:52:31 crc kubenswrapper[4792]: I0216 21:52:31.177087 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerStarted","Data":"ef0346e96a183cd2e506fe4703cbe5a4f0323a39cd8a14df49b1af436d2760cc"} Feb 16 21:52:31 crc kubenswrapper[4792]: I0216 21:52:31.177131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerStarted","Data":"61edc6c36ff3e28311afe267fac509e48107d86bd31e4fba170b56ceb25e5eee"} Feb 16 21:52:31 crc kubenswrapper[4792]: I0216 21:52:31.532905 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:52:31 crc kubenswrapper[4792]: I0216 21:52:31.532967 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:52:32 crc kubenswrapper[4792]: I0216 21:52:32.186321 4792 generic.go:334] "Generic (PLEG): container finished" podID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerID="ef0346e96a183cd2e506fe4703cbe5a4f0323a39cd8a14df49b1af436d2760cc" exitCode=0 Feb 16 21:52:32 crc kubenswrapper[4792]: I0216 21:52:32.186378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerDied","Data":"ef0346e96a183cd2e506fe4703cbe5a4f0323a39cd8a14df49b1af436d2760cc"} Feb 16 21:52:32 crc kubenswrapper[4792]: I0216 21:52:32.188541 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:52:34 crc kubenswrapper[4792]: I0216 21:52:34.202774 4792 generic.go:334] "Generic (PLEG): container finished" podID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerID="80b094465215ec3401fabc5268bfd78f0c364fa4539e3d7f75baca0201191455" exitCode=0 Feb 16 21:52:34 crc kubenswrapper[4792]: I0216 21:52:34.202895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerDied","Data":"80b094465215ec3401fabc5268bfd78f0c364fa4539e3d7f75baca0201191455"} Feb 16 21:52:35 crc kubenswrapper[4792]: I0216 21:52:35.234533 4792 generic.go:334] "Generic (PLEG): container finished" podID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerID="9f220566ef7e43da30bbfeb9c5549b60a670dd6eeaf57071a336f894c39c6e6b" exitCode=0 Feb 16 21:52:35 crc kubenswrapper[4792]: I0216 21:52:35.235072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerDied","Data":"9f220566ef7e43da30bbfeb9c5549b60a670dd6eeaf57071a336f894c39c6e6b"} Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.579333 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.662444 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csq46\" (UniqueName: \"kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46\") pod \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.662531 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util\") pod \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.662554 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle\") pod \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\" (UID: \"6f4d19e1-687e-44c3-928f-bda7f0b893f9\") " Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.663715 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle" (OuterVolumeSpecName: "bundle") pod "6f4d19e1-687e-44c3-928f-bda7f0b893f9" (UID: "6f4d19e1-687e-44c3-928f-bda7f0b893f9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.669566 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46" (OuterVolumeSpecName: "kube-api-access-csq46") pod "6f4d19e1-687e-44c3-928f-bda7f0b893f9" (UID: "6f4d19e1-687e-44c3-928f-bda7f0b893f9"). InnerVolumeSpecName "kube-api-access-csq46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.674626 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util" (OuterVolumeSpecName: "util") pod "6f4d19e1-687e-44c3-928f-bda7f0b893f9" (UID: "6f4d19e1-687e-44c3-928f-bda7f0b893f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.763930 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csq46\" (UniqueName: \"kubernetes.io/projected/6f4d19e1-687e-44c3-928f-bda7f0b893f9-kube-api-access-csq46\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.763971 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:36 crc kubenswrapper[4792]: I0216 21:52:36.763984 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f4d19e1-687e-44c3-928f-bda7f0b893f9-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:52:37 crc kubenswrapper[4792]: I0216 21:52:37.256752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" event={"ID":"6f4d19e1-687e-44c3-928f-bda7f0b893f9","Type":"ContainerDied","Data":"61edc6c36ff3e28311afe267fac509e48107d86bd31e4fba170b56ceb25e5eee"} Feb 16 21:52:37 crc kubenswrapper[4792]: I0216 21:52:37.256799 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61edc6c36ff3e28311afe267fac509e48107d86bd31e4fba170b56ceb25e5eee" Feb 16 21:52:37 crc kubenswrapper[4792]: I0216 21:52:37.257042 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.145884 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q"] Feb 16 21:52:45 crc kubenswrapper[4792]: E0216 21:52:45.146790 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="util" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.146807 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="util" Feb 16 21:52:45 crc kubenswrapper[4792]: E0216 21:52:45.146818 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="extract" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.146826 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="extract" Feb 16 21:52:45 crc kubenswrapper[4792]: E0216 21:52:45.146842 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="pull" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.146852 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="pull" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.147009 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4d19e1-687e-44c3-928f-bda7f0b893f9" containerName="extract" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.147661 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.150709 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.150857 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.150919 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x89v5" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.152512 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.152647 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.167522 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q"] Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.196352 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-apiservice-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.196448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-w4fmx\" (UniqueName: \"kubernetes.io/projected/a4687638-a268-4abd-afdd-3c7d7b257113-kube-api-access-w4fmx\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.196483 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-webhook-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.297833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4fmx\" (UniqueName: \"kubernetes.io/projected/a4687638-a268-4abd-afdd-3c7d7b257113-kube-api-access-w4fmx\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.297891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-webhook-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.297979 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-apiservice-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.303312 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-apiservice-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.303320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4687638-a268-4abd-afdd-3c7d7b257113-webhook-cert\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.315400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4fmx\" (UniqueName: \"kubernetes.io/projected/a4687638-a268-4abd-afdd-3c7d7b257113-kube-api-access-w4fmx\") pod \"metallb-operator-controller-manager-b99cc5488-gwb5q\" (UID: \"a4687638-a268-4abd-afdd-3c7d7b257113\") " pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.398897 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-678488bb86-zks4j"] Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.399986 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.403284 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.403383 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tnljb" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.408494 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.413109 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-678488bb86-zks4j"] Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.464655 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.501443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-webhook-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.501528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-apiservice-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.501928 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrbg\" (UniqueName: \"kubernetes.io/projected/2a4c6ce4-d81d-460a-a14e-0701afe8957f-kube-api-access-fxrbg\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.603439 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-apiservice-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.603791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrbg\" (UniqueName: \"kubernetes.io/projected/2a4c6ce4-d81d-460a-a14e-0701afe8957f-kube-api-access-fxrbg\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.603821 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-webhook-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.608455 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-apiservice-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.610487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a4c6ce4-d81d-460a-a14e-0701afe8957f-webhook-cert\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.627684 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrbg\" (UniqueName: \"kubernetes.io/projected/2a4c6ce4-d81d-460a-a14e-0701afe8957f-kube-api-access-fxrbg\") pod \"metallb-operator-webhook-server-678488bb86-zks4j\" (UID: \"2a4c6ce4-d81d-460a-a14e-0701afe8957f\") " pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:45 crc kubenswrapper[4792]: I0216 21:52:45.714778 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:46 crc kubenswrapper[4792]: I0216 21:52:46.012974 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q"] Feb 16 21:52:46 crc kubenswrapper[4792]: W0216 21:52:46.020820 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4687638_a268_4abd_afdd_3c7d7b257113.slice/crio-92c4d353564049cfede8108b52e2cc825892f9d5f1e05dea077497c23c9fc4a5 WatchSource:0}: Error finding container 92c4d353564049cfede8108b52e2cc825892f9d5f1e05dea077497c23c9fc4a5: Status 404 returned error can't find the container with id 92c4d353564049cfede8108b52e2cc825892f9d5f1e05dea077497c23c9fc4a5 Feb 16 21:52:46 crc kubenswrapper[4792]: I0216 21:52:46.194616 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-678488bb86-zks4j"] Feb 16 21:52:46 crc kubenswrapper[4792]: W0216 21:52:46.197834 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4c6ce4_d81d_460a_a14e_0701afe8957f.slice/crio-301219e82941033aeaf0622e16e7463e22728c0892aba181a260a75d1cf85b13 WatchSource:0}: Error finding container 301219e82941033aeaf0622e16e7463e22728c0892aba181a260a75d1cf85b13: Status 404 returned error can't find the container with id 301219e82941033aeaf0622e16e7463e22728c0892aba181a260a75d1cf85b13 Feb 16 21:52:46 crc kubenswrapper[4792]: I0216 21:52:46.323186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" 
event={"ID":"2a4c6ce4-d81d-460a-a14e-0701afe8957f","Type":"ContainerStarted","Data":"301219e82941033aeaf0622e16e7463e22728c0892aba181a260a75d1cf85b13"} Feb 16 21:52:46 crc kubenswrapper[4792]: I0216 21:52:46.323894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" event={"ID":"a4687638-a268-4abd-afdd-3c7d7b257113","Type":"ContainerStarted","Data":"92c4d353564049cfede8108b52e2cc825892f9d5f1e05dea077497c23c9fc4a5"} Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.385408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" event={"ID":"2a4c6ce4-d81d-460a-a14e-0701afe8957f","Type":"ContainerStarted","Data":"221d891ea511a4271168d42354ee2ad4cf9bdc4b38a16c347de1a0da74a53dbd"} Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.385911 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.387312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" event={"ID":"a4687638-a268-4abd-afdd-3c7d7b257113","Type":"ContainerStarted","Data":"dc607a7d79dab11ab8e61caabfece8ef9577b03a7f64b55f8a564454bb998a16"} Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.387461 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.407863 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" podStartSLOduration=1.714970243 podStartE2EDuration="6.407847037s" podCreationTimestamp="2026-02-16 21:52:45 +0000 UTC" firstStartedPulling="2026-02-16 21:52:46.201031126 +0000 UTC m=+898.854310017" lastFinishedPulling="2026-02-16 21:52:50.89390792 +0000 UTC m=+903.547186811" observedRunningTime="2026-02-16 21:52:51.40754775 +0000 UTC m=+904.060826641" watchObservedRunningTime="2026-02-16 21:52:51.407847037 +0000 UTC m=+904.061125928" Feb 16 21:52:51 crc kubenswrapper[4792]: I0216 21:52:51.440211 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" podStartSLOduration=1.604178587 podStartE2EDuration="6.440197709s" podCreationTimestamp="2026-02-16 21:52:45 +0000 UTC" firstStartedPulling="2026-02-16 21:52:46.022959761 +0000 UTC m=+898.676238662" lastFinishedPulling="2026-02-16 21:52:50.858978893 +0000 UTC m=+903.512257784" observedRunningTime="2026-02-16 21:52:51.436744502 +0000 UTC m=+904.090023393" watchObservedRunningTime="2026-02-16 21:52:51.440197709 +0000 UTC m=+904.093476600" Feb 16 21:53:01 crc kubenswrapper[4792]: I0216 21:53:01.532384 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:53:01 crc kubenswrapper[4792]: I0216 21:53:01.532835 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:53:05 crc kubenswrapper[4792]: I0216 21:53:05.734487 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-678488bb86-zks4j" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.729139 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.733922 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.754513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.833434 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.833554 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.833779 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gpn\" (UniqueName: \"kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.934758 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gpn\" (UniqueName: \"kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.934814 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.934866 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.936136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities\") pod \"community-operators-jhj58\" (UID: 
\"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.936130 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:17 crc kubenswrapper[4792]: I0216 21:53:17.962816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gpn\" (UniqueName: \"kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn\") pod \"community-operators-jhj58\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:18 crc kubenswrapper[4792]: I0216 21:53:18.060744 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:18 crc kubenswrapper[4792]: I0216 21:53:18.542511 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:18 crc kubenswrapper[4792]: I0216 21:53:18.617225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerStarted","Data":"eac688b6d442f3c426b782b35afc83c156c1828f600029d13c6840184e7a2196"} Feb 16 21:53:19 crc kubenswrapper[4792]: I0216 21:53:19.625819 4792 generic.go:334] "Generic (PLEG): container finished" podID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerID="cfbf3978de8f1212662194c38b3accbe73405d3498de91072fffc0c349dddf76" exitCode=0 Feb 16 21:53:19 crc kubenswrapper[4792]: I0216 21:53:19.625921 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerDied","Data":"cfbf3978de8f1212662194c38b3accbe73405d3498de91072fffc0c349dddf76"} Feb 16 21:53:20 crc kubenswrapper[4792]: I0216 21:53:20.634935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerStarted","Data":"dd5c98c76fc58176fd9c3a0242e67f51322a3d185e462da26548791787806989"} Feb 16 21:53:21 crc kubenswrapper[4792]: I0216 21:53:21.641627 4792 generic.go:334] "Generic (PLEG): container finished" podID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerID="dd5c98c76fc58176fd9c3a0242e67f51322a3d185e462da26548791787806989" exitCode=0 Feb 16 21:53:21 crc kubenswrapper[4792]: I0216 21:53:21.641673 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerDied","Data":"dd5c98c76fc58176fd9c3a0242e67f51322a3d185e462da26548791787806989"} Feb 16 21:53:22 crc kubenswrapper[4792]: I0216 21:53:22.657678 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerStarted","Data":"18c96821aa3b21c641e6fe0defc5d3c00607bf3307bfdc142c501ba76ee9fd45"} Feb 16 21:53:22 crc kubenswrapper[4792]: I0216 21:53:22.691295 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-jhj58" podStartSLOduration=3.051885103 podStartE2EDuration="5.691275362s" podCreationTimestamp="2026-02-16 21:53:17 +0000 UTC" firstStartedPulling="2026-02-16 21:53:19.627875675 +0000 UTC m=+932.281154556" lastFinishedPulling="2026-02-16 21:53:22.267265924 +0000 UTC m=+934.920544815" observedRunningTime="2026-02-16 21:53:22.686986958 +0000 UTC m=+935.340265859" watchObservedRunningTime="2026-02-16 21:53:22.691275362 +0000 UTC m=+935.344554253" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.531285 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.533172 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.545587 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.638975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9h6s\" (UniqueName: \"kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.639033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.639167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.740455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.740608 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9h6s\" (UniqueName: \"kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.740645 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.740959 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.741099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.760530 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9h6s\" (UniqueName: \"kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s\") pod \"redhat-marketplace-fgjxf\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:24 crc kubenswrapper[4792]: I0216 21:53:24.855224 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:25 crc kubenswrapper[4792]: I0216 21:53:25.314993 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:25 crc kubenswrapper[4792]: W0216 21:53:25.329797 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e87d086_6c61_4dd8_9fce_3eb16952403f.slice/crio-5bc55f7002a030eb3cafc18be029baad2447f348d8c8d37971579568e72e4333 WatchSource:0}: Error finding container 5bc55f7002a030eb3cafc18be029baad2447f348d8c8d37971579568e72e4333: Status 404 returned error can't find the container with id 5bc55f7002a030eb3cafc18be029baad2447f348d8c8d37971579568e72e4333 Feb 16 21:53:25 crc kubenswrapper[4792]: I0216 21:53:25.468048 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b99cc5488-gwb5q" Feb 16 21:53:25 crc kubenswrapper[4792]: I0216 21:53:25.678078 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerID="5b3c38f7b6d4a58eb7b81d14aecc23633476a9e916a79cb7dfa50fe2ded5ba20" exitCode=0 Feb 16 21:53:25 crc kubenswrapper[4792]: I0216 21:53:25.678135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerDied","Data":"5b3c38f7b6d4a58eb7b81d14aecc23633476a9e916a79cb7dfa50fe2ded5ba20"} Feb 16 21:53:25 crc kubenswrapper[4792]: I0216 21:53:25.678160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerStarted","Data":"5bc55f7002a030eb3cafc18be029baad2447f348d8c8d37971579568e72e4333"} Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.250260 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-s7hh8"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.254843 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.264546 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fw968" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.264822 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.264976 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.265728 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.266652 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.268797 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.299429 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.354893 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8bvkf"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.356024 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.359174 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.359620 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7q8tc" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.359891 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.360148 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.365964 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-pst5t"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.367131 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.368277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx72z\" (UniqueName: \"kubernetes.io/projected/de99e45c-01de-43eb-84bb-a601f9242155-kube-api-access-bx72z\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.368547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-pst5t"] Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.373690 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.374924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de99e45c-01de-43eb-84bb-a601f9242155-frr-startup\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375205 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f12e75d7-4541-4024-b589-eb6cd86c6d18-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-sockets\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375343 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvmq\" (UniqueName: \"kubernetes.io/projected/f12e75d7-4541-4024-b589-eb6cd86c6d18-kube-api-access-rdvmq\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-conf\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-metrics\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.375531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-reloader\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 
crc kubenswrapper[4792]: I0216 21:53:26.375650 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de99e45c-01de-43eb-84bb-a601f9242155-metrics-certs\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477620 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8a21d7f-64c4-4182-9950-4ab70399f312-metallb-excludel2\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477647 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-metrics\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-reloader\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477706 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-cert\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477727 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697g4\" (UniqueName: \"kubernetes.io/projected/f8a21d7f-64c4-4182-9950-4ab70399f312-kube-api-access-697g4\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de99e45c-01de-43eb-84bb-a601f9242155-metrics-certs\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx72z\" (UniqueName: \"kubernetes.io/projected/de99e45c-01de-43eb-84bb-a601f9242155-kube-api-access-bx72z\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477803 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/de99e45c-01de-43eb-84bb-a601f9242155-frr-startup\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477828 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvh8x\" (UniqueName: \"kubernetes.io/projected/2c5546f7-52f2-453d-8979-ce4ccd26c165-kube-api-access-tvh8x\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477845 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-metrics-certs\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477871 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f12e75d7-4541-4024-b589-eb6cd86c6d18-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-metrics-certs\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477926 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-sockets\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477945 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvmq\" (UniqueName: \"kubernetes.io/projected/f12e75d7-4541-4024-b589-eb6cd86c6d18-kube-api-access-rdvmq\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.477963 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-conf\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.478279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-reloader\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.478984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-conf\") pod \"frr-k8s-s7hh8\" (UID: 
\"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.479036 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-metrics\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.479204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de99e45c-01de-43eb-84bb-a601f9242155-frr-sockets\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.479723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de99e45c-01de-43eb-84bb-a601f9242155-frr-startup\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.487462 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f12e75d7-4541-4024-b589-eb6cd86c6d18-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.488465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de99e45c-01de-43eb-84bb-a601f9242155-metrics-certs\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.494887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx72z\" (UniqueName: \"kubernetes.io/projected/de99e45c-01de-43eb-84bb-a601f9242155-kube-api-access-bx72z\") pod \"frr-k8s-s7hh8\" (UID: \"de99e45c-01de-43eb-84bb-a601f9242155\") " pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.496231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvmq\" (UniqueName: \"kubernetes.io/projected/f12e75d7-4541-4024-b589-eb6cd86c6d18-kube-api-access-rdvmq\") pod \"frr-k8s-webhook-server-78b44bf5bb-zkb5q\" (UID: \"f12e75d7-4541-4024-b589-eb6cd86c6d18\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579660 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-697g4\" (UniqueName: \"kubernetes.io/projected/f8a21d7f-64c4-4182-9950-4ab70399f312-kube-api-access-697g4\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvh8x\" (UniqueName: \"kubernetes.io/projected/2c5546f7-52f2-453d-8979-ce4ccd26c165-kube-api-access-tvh8x\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579752 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-metrics-certs\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-metrics-certs\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579839 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8a21d7f-64c4-4182-9950-4ab70399f312-metallb-excludel2\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.579895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-cert\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.581169 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:26 crc kubenswrapper[4792]: E0216 21:53:26.581360 4792 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:53:26 crc kubenswrapper[4792]: E0216 21:53:26.581461 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist podName:f8a21d7f-64c4-4182-9950-4ab70399f312 nodeName:}" failed. No retries permitted until 2026-02-16 21:53:27.081437238 +0000 UTC m=+939.734716129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist") pod "speaker-8bvkf" (UID: "f8a21d7f-64c4-4182-9950-4ab70399f312") : secret "metallb-memberlist" not found Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.581738 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8a21d7f-64c4-4182-9950-4ab70399f312-metallb-excludel2\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.583081 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.584843 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-metrics-certs\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.587526 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.593526 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-cert\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.593771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5546f7-52f2-453d-8979-ce4ccd26c165-metrics-certs\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.602395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvh8x\" (UniqueName: \"kubernetes.io/projected/2c5546f7-52f2-453d-8979-ce4ccd26c165-kube-api-access-tvh8x\") pod \"controller-69bbfbf88f-pst5t\" (UID: \"2c5546f7-52f2-453d-8979-ce4ccd26c165\") " pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.604993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-697g4\" (UniqueName: \"kubernetes.io/projected/f8a21d7f-64c4-4182-9950-4ab70399f312-kube-api-access-697g4\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.695759 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:26 crc kubenswrapper[4792]: I0216 21:53:26.695882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerStarted","Data":"898c3536fa85ddf178972f21a8ef93e3ed6b8a36d5666654d5317b1d975b7be4"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.076504 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q"] Feb 16 21:53:27 crc kubenswrapper[4792]: W0216 21:53:27.078052 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12e75d7_4541_4024_b589_eb6cd86c6d18.slice/crio-4a9beb2a1e0f891aa57f7eef47b32cd2b052d740d1afb937a3cfbc80fbb0586b WatchSource:0}: Error finding container 4a9beb2a1e0f891aa57f7eef47b32cd2b052d740d1afb937a3cfbc80fbb0586b: Status 404 returned error can't find the container with id 4a9beb2a1e0f891aa57f7eef47b32cd2b052d740d1afb937a3cfbc80fbb0586b Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.089576 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:27 crc kubenswrapper[4792]: E0216 21:53:27.089742 4792 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 21:53:27 crc kubenswrapper[4792]: E0216 21:53:27.089827 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist podName:f8a21d7f-64c4-4182-9950-4ab70399f312 nodeName:}" failed. No retries permitted until 2026-02-16 21:53:28.089808417 +0000 UTC m=+940.743087328 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist") pod "speaker-8bvkf" (UID: "f8a21d7f-64c4-4182-9950-4ab70399f312") : secret "metallb-memberlist" not found Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.158163 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-pst5t"] Feb 16 21:53:27 crc kubenswrapper[4792]: W0216 21:53:27.160568 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c5546f7_52f2_453d_8979_ce4ccd26c165.slice/crio-c462f56157abfa509f96173289ca4acc547fdd76f0b98702c1776d2ad4bdd231 WatchSource:0}: Error finding container c462f56157abfa509f96173289ca4acc547fdd76f0b98702c1776d2ad4bdd231: Status 404 returned error can't find the container with id c462f56157abfa509f96173289ca4acc547fdd76f0b98702c1776d2ad4bdd231 Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.710166 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerID="898c3536fa85ddf178972f21a8ef93e3ed6b8a36d5666654d5317b1d975b7be4" exitCode=0 Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.710659 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerDied","Data":"898c3536fa85ddf178972f21a8ef93e3ed6b8a36d5666654d5317b1d975b7be4"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.717441 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" event={"ID":"f12e75d7-4541-4024-b589-eb6cd86c6d18","Type":"ContainerStarted","Data":"4a9beb2a1e0f891aa57f7eef47b32cd2b052d740d1afb937a3cfbc80fbb0586b"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.720433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pst5t" event={"ID":"2c5546f7-52f2-453d-8979-ce4ccd26c165","Type":"ContainerStarted","Data":"b0b597efb332256a92ca762a7a993ab78d95b69fe1be89ec234e086b272521fe"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.720475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pst5t" event={"ID":"2c5546f7-52f2-453d-8979-ce4ccd26c165","Type":"ContainerStarted","Data":"d5dae7c29bbb6e57334f574421d7b886efeb98024be0c1805859df999fa88efb"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.720491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-pst5t" event={"ID":"2c5546f7-52f2-453d-8979-ce4ccd26c165","Type":"ContainerStarted","Data":"c462f56157abfa509f96173289ca4acc547fdd76f0b98702c1776d2ad4bdd231"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.720838 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.722149 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"00216c9964d2a3e1681cf372ed06991b6edfec52d01c564a2f97cb0f9d00d473"} Feb 16 21:53:27 crc kubenswrapper[4792]: I0216 21:53:27.768147 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-pst5t" podStartSLOduration=1.768120448 
podStartE2EDuration="1.768120448s" podCreationTimestamp="2026-02-16 21:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:53:27.766380012 +0000 UTC m=+940.419658933" watchObservedRunningTime="2026-02-16 21:53:27.768120448 +0000 UTC m=+940.421399379" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.062446 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.062916 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.112444 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.133032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8a21d7f-64c4-4182-9950-4ab70399f312-memberlist\") pod \"speaker-8bvkf\" (UID: \"f8a21d7f-64c4-4182-9950-4ab70399f312\") " pod="metallb-system/speaker-8bvkf" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.185817 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.188150 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8bvkf" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.748092 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerStarted","Data":"e78f174a2c449fc308154c7b485cbd74a8b28563e9373a6616069cff6a699b16"} Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.754072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8bvkf" event={"ID":"f8a21d7f-64c4-4182-9950-4ab70399f312","Type":"ContainerStarted","Data":"a2b00806b5207872db909a901d7a6dc9cfb5fcf027631d1e3f7a42dc8620e9d3"} Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.754098 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8bvkf" event={"ID":"f8a21d7f-64c4-4182-9950-4ab70399f312","Type":"ContainerStarted","Data":"e3e57420718a438176e4b82cd3e85522391fb94b459df4bdb372009f0d4ef87b"} Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.775969 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgjxf" podStartSLOduration=2.3072073619999998 podStartE2EDuration="4.775951988s" podCreationTimestamp="2026-02-16 21:53:24 +0000 UTC" firstStartedPulling="2026-02-16 21:53:25.679329581 +0000 UTC m=+938.332608472" lastFinishedPulling="2026-02-16 21:53:28.148074207 +0000 UTC m=+940.801353098" observedRunningTime="2026-02-16 21:53:28.774331014 +0000 UTC m=+941.427609905" watchObservedRunningTime="2026-02-16 21:53:28.775951988 +0000 UTC m=+941.429230879" Feb 16 21:53:28 crc kubenswrapper[4792]: I0216 21:53:28.841311 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:29 crc kubenswrapper[4792]: I0216 21:53:29.781658 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8bvkf" event={"ID":"f8a21d7f-64c4-4182-9950-4ab70399f312","Type":"ContainerStarted","Data":"a900d2754edda0c51226aead01a2cd05cc3d0aabac6d29ccdd1edf512753e640"} Feb 16 21:53:29 crc kubenswrapper[4792]: I0216 21:53:29.782213 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8bvkf" Feb 16 21:53:29 crc kubenswrapper[4792]: I0216 21:53:29.814272 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8bvkf" podStartSLOduration=3.8142478730000002 podStartE2EDuration="3.814247873s" podCreationTimestamp="2026-02-16 21:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:53:29.801896962 +0000 UTC m=+942.455175873" watchObservedRunningTime="2026-02-16 21:53:29.814247873 +0000 UTC m=+942.467526774" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.139144 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.142350 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.150040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.264615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.265308 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwprc\" (UniqueName: \"kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.265364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.368045 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.368571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities\") pod \"certified-operators-tb8dq\" (UID: 
\"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.368802 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.368912 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwprc\" (UniqueName: \"kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.369071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.394565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwprc\" (UniqueName: \"kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc\") pod \"certified-operators-tb8dq\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.473810 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.532452 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.532538 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.532590 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.533386 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:53:31 crc kubenswrapper[4792]: I0216 21:53:31.533453 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7" gracePeriod=600 Feb 16 21:53:32 crc kubenswrapper[4792]: I0216 21:53:32.045257 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:32 crc kubenswrapper[4792]: I0216 21:53:32.826159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerStarted","Data":"d88b5a6abf7cd13e21e21955e73e1b628a1f2fea32bd97fbf7f57f01a54a68ce"} Feb 16 21:53:32 crc kubenswrapper[4792]: I0216 21:53:32.838908 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7" exitCode=0 Feb 16 21:53:32 crc kubenswrapper[4792]: I0216 21:53:32.838958 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7"} Feb 16 21:53:32 crc kubenswrapper[4792]: I0216 21:53:32.838991 4792 scope.go:117] "RemoveContainer" containerID="e0d874e70735a6bee795bdff7c886fc474741c00e0f4ef5e56c9d7cde9efb6b2" Feb 16 21:53:33 crc kubenswrapper[4792]: I0216 21:53:33.860645 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerID="c280d091a4658ab7da672d0c31fc7b946de9ab25da79f21f6fc2d302e0ca3ee0" exitCode=0 Feb 16 21:53:33 crc kubenswrapper[4792]: I0216 21:53:33.860967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" 
event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerDied","Data":"c280d091a4658ab7da672d0c31fc7b946de9ab25da79f21f6fc2d302e0ca3ee0"} Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.524694 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.524939 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jhj58" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="registry-server" containerID="cri-o://18c96821aa3b21c641e6fe0defc5d3c00607bf3307bfdc142c501ba76ee9fd45" gracePeriod=2 Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.856157 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.856203 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.873344 4792 generic.go:334] "Generic (PLEG): container finished" podID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerID="18c96821aa3b21c641e6fe0defc5d3c00607bf3307bfdc142c501ba76ee9fd45" exitCode=0 Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.873390 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerDied","Data":"18c96821aa3b21c641e6fe0defc5d3c00607bf3307bfdc142c501ba76ee9fd45"} Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.909817 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:34 crc kubenswrapper[4792]: I0216 21:53:34.990164 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.536920 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.649639 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities\") pod \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.649708 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content\") pod \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.649802 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5gpn\" (UniqueName: \"kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn\") pod \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\" (UID: \"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f\") " Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.650686 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities" (OuterVolumeSpecName: "utilities") pod "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" (UID: "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.655412 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn" (OuterVolumeSpecName: "kube-api-access-w5gpn") pod "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" (UID: "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f"). InnerVolumeSpecName "kube-api-access-w5gpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.696730 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" (UID: "7902e27b-8b2d-4768-9930-4dfe6e0ffa4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.752181 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.752221 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.752232 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5gpn\" (UniqueName: \"kubernetes.io/projected/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f-kube-api-access-w5gpn\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.882552 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerStarted","Data":"4f28246603029189bce6382e1627d81a19bd18e1257b2e330e5a2f2c37267bde"} Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.885240 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9"} Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.886880 4792 generic.go:334] "Generic (PLEG): container finished" podID="de99e45c-01de-43eb-84bb-a601f9242155" containerID="9c7ab62e0c700ddc8d2d6ad878d0004c4564166bc0a44fa2ad0b9bf96ec13e58" exitCode=0 Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.887009 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerDied","Data":"9c7ab62e0c700ddc8d2d6ad878d0004c4564166bc0a44fa2ad0b9bf96ec13e58"} Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.890223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhj58" event={"ID":"7902e27b-8b2d-4768-9930-4dfe6e0ffa4f","Type":"ContainerDied","Data":"eac688b6d442f3c426b782b35afc83c156c1828f600029d13c6840184e7a2196"} Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.890265 4792 scope.go:117] "RemoveContainer" containerID="18c96821aa3b21c641e6fe0defc5d3c00607bf3307bfdc142c501ba76ee9fd45" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.890481 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jhj58" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.893277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" event={"ID":"f12e75d7-4541-4024-b589-eb6cd86c6d18","Type":"ContainerStarted","Data":"60d0467be49718c509b24a3e44d58f688decf6100f975446505affeb2d782080"} Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.893354 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.911017 4792 scope.go:117] "RemoveContainer" containerID="dd5c98c76fc58176fd9c3a0242e67f51322a3d185e462da26548791787806989" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.925813 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" podStartSLOduration=1.6910602049999999 podStartE2EDuration="9.925796978s" podCreationTimestamp="2026-02-16 21:53:26 +0000 UTC" firstStartedPulling="2026-02-16 21:53:27.080285682 +0000 UTC m=+939.733564603" lastFinishedPulling="2026-02-16 21:53:35.315022485 +0000 UTC m=+947.968301376" observedRunningTime="2026-02-16 21:53:35.921010809 +0000 UTC m=+948.574289700" watchObservedRunningTime="2026-02-16 21:53:35.925796978 +0000 UTC m=+948.579075859" Feb 16 21:53:35 crc kubenswrapper[4792]: I0216 21:53:35.957151 4792 scope.go:117] "RemoveContainer" containerID="cfbf3978de8f1212662194c38b3accbe73405d3498de91072fffc0c349dddf76" Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.003712 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.010436 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jhj58"] Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.038129 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" path="/var/lib/kubelet/pods/7902e27b-8b2d-4768-9930-4dfe6e0ffa4f/volumes" Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.900974 4792 generic.go:334] "Generic (PLEG): container finished" podID="de99e45c-01de-43eb-84bb-a601f9242155" containerID="ae12cd6435f77f83a60bfa39bc3768feb4396b979286e05f75bdaaa184533de0" exitCode=0 Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.901091 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerDied","Data":"ae12cd6435f77f83a60bfa39bc3768feb4396b979286e05f75bdaaa184533de0"} Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.904381 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerID="4f28246603029189bce6382e1627d81a19bd18e1257b2e330e5a2f2c37267bde" exitCode=0 Feb 16 21:53:36 crc kubenswrapper[4792]: I0216 21:53:36.904509 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerDied","Data":"4f28246603029189bce6382e1627d81a19bd18e1257b2e330e5a2f2c37267bde"} Feb 16 21:53:37 crc kubenswrapper[4792]: I0216 21:53:37.913270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" 
event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerStarted","Data":"73e9e687718d524d7944448415f0ebd3c72db4cb7e6b2e6373d6d5c70b2678ba"} Feb 16 21:53:37 crc kubenswrapper[4792]: I0216 21:53:37.916269 4792 generic.go:334] "Generic (PLEG): container finished" podID="de99e45c-01de-43eb-84bb-a601f9242155" containerID="a1be2ee126a5b0d5b4bccc1a4e4d8dc37b9c465dc0c266b359d791c19449fda5" exitCode=0 Feb 16 21:53:37 crc kubenswrapper[4792]: I0216 21:53:37.916314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerDied","Data":"a1be2ee126a5b0d5b4bccc1a4e4d8dc37b9c465dc0c266b359d791c19449fda5"} Feb 16 21:53:37 crc kubenswrapper[4792]: I0216 21:53:37.941516 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tb8dq" podStartSLOduration=4.787481362 podStartE2EDuration="6.941495967s" podCreationTimestamp="2026-02-16 21:53:31 +0000 UTC" firstStartedPulling="2026-02-16 21:53:35.180703077 +0000 UTC m=+947.833981968" lastFinishedPulling="2026-02-16 21:53:37.334717692 +0000 UTC m=+949.987996573" observedRunningTime="2026-02-16 21:53:37.935779974 +0000 UTC m=+950.589058925" watchObservedRunningTime="2026-02-16 21:53:37.941495967 +0000 UTC m=+950.594774858" Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.191576 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8bvkf" Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.943402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"de29cd64ae0e95818b6528fac977d7e511512a212e61ae9b3f51fdab28528cc2"} Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.943749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"374dcb7cdcf3e7d8fe0652e96de8d93b38f4847c06d61b8a279df95b03e49947"} Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.943763 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"4361a837cfa894e67fe77e5c081b539ee6c5c77425c0bd125b86605e3bf44a02"} Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.943773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"1d287a6e06347a23ec129d507fd7550973d3239bcda31bb71b513787674d1ac2"} Feb 16 21:53:38 crc kubenswrapper[4792]: I0216 21:53:38.943781 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"9012a465ecb592aa44b904d4598a32f5b17b5f3c0bcf57799ad8a6cfd2320df5"} Feb 16 21:53:39 crc kubenswrapper[4792]: I0216 21:53:39.953816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s7hh8" event={"ID":"de99e45c-01de-43eb-84bb-a601f9242155","Type":"ContainerStarted","Data":"599babe3648fcc5ebe8ff461e97aea479751db0cd488cf16a3df7d1fbb8af0c6"} Feb 16 21:53:39 crc kubenswrapper[4792]: I0216 21:53:39.954874 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:39 crc kubenswrapper[4792]: 
I0216 21:53:39.973284 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-s7hh8" podStartSLOduration=5.488380152 podStartE2EDuration="13.973264047s" podCreationTimestamp="2026-02-16 21:53:26 +0000 UTC" firstStartedPulling="2026-02-16 21:53:26.792486692 +0000 UTC m=+939.445765583" lastFinishedPulling="2026-02-16 21:53:35.277370587 +0000 UTC m=+947.930649478" observedRunningTime="2026-02-16 21:53:39.971580152 +0000 UTC m=+952.624859063" watchObservedRunningTime="2026-02-16 21:53:39.973264047 +0000 UTC m=+952.626542938" Feb 16 21:53:40 crc kubenswrapper[4792]: I0216 21:53:40.722676 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:40 crc kubenswrapper[4792]: I0216 21:53:40.722905 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgjxf" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="registry-server" containerID="cri-o://e78f174a2c449fc308154c7b485cbd74a8b28563e9373a6616069cff6a699b16" gracePeriod=2 Feb 16 21:53:40 crc kubenswrapper[4792]: I0216 21:53:40.970702 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerID="e78f174a2c449fc308154c7b485cbd74a8b28563e9373a6616069cff6a699b16" exitCode=0 Feb 16 21:53:40 crc kubenswrapper[4792]: I0216 21:53:40.970790 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerDied","Data":"e78f174a2c449fc308154c7b485cbd74a8b28563e9373a6616069cff6a699b16"} Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.160709 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.249002 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9h6s\" (UniqueName: \"kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s\") pod \"3e87d086-6c61-4dd8-9fce-3eb16952403f\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.249118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content\") pod \"3e87d086-6c61-4dd8-9fce-3eb16952403f\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.249223 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities\") pod \"3e87d086-6c61-4dd8-9fce-3eb16952403f\" (UID: \"3e87d086-6c61-4dd8-9fce-3eb16952403f\") " Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.250301 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities" (OuterVolumeSpecName: "utilities") pod "3e87d086-6c61-4dd8-9fce-3eb16952403f" (UID: "3e87d086-6c61-4dd8-9fce-3eb16952403f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.265827 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s" (OuterVolumeSpecName: "kube-api-access-z9h6s") pod "3e87d086-6c61-4dd8-9fce-3eb16952403f" (UID: "3e87d086-6c61-4dd8-9fce-3eb16952403f"). InnerVolumeSpecName "kube-api-access-z9h6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.270514 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e87d086-6c61-4dd8-9fce-3eb16952403f" (UID: "3e87d086-6c61-4dd8-9fce-3eb16952403f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.351735 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9h6s\" (UniqueName: \"kubernetes.io/projected/3e87d086-6c61-4dd8-9fce-3eb16952403f-kube-api-access-z9h6s\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.351776 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.351789 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e87d086-6c61-4dd8-9fce-3eb16952403f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.474158 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.474241 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.530124 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.582481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.616460 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.982159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgjxf" event={"ID":"3e87d086-6c61-4dd8-9fce-3eb16952403f","Type":"ContainerDied","Data":"5bc55f7002a030eb3cafc18be029baad2447f348d8c8d37971579568e72e4333"} Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.982225 4792 scope.go:117] "RemoveContainer" containerID="e78f174a2c449fc308154c7b485cbd74a8b28563e9373a6616069cff6a699b16" Feb 16 21:53:41 crc kubenswrapper[4792]: I0216 21:53:41.982263 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgjxf" Feb 16 21:53:42 crc kubenswrapper[4792]: I0216 21:53:42.017314 4792 scope.go:117] "RemoveContainer" containerID="898c3536fa85ddf178972f21a8ef93e3ed6b8a36d5666654d5317b1d975b7be4" Feb 16 21:53:42 crc kubenswrapper[4792]: I0216 21:53:42.042790 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:42 crc kubenswrapper[4792]: I0216 21:53:42.053299 4792 scope.go:117] "RemoveContainer" containerID="5b3c38f7b6d4a58eb7b81d14aecc23633476a9e916a79cb7dfa50fe2ded5ba20" Feb 16 21:53:42 crc kubenswrapper[4792]: I0216 21:53:42.055669 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgjxf"] Feb 16 21:53:44 crc kubenswrapper[4792]: I0216 21:53:44.037617 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" path="/var/lib/kubelet/pods/3e87d086-6c61-4dd8-9fce-3eb16952403f/volumes" Feb 16 21:53:46 crc kubenswrapper[4792]: I0216 21:53:46.596801 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zkb5q" Feb 16 21:53:46 crc kubenswrapper[4792]: I0216 21:53:46.700494 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-pst5t" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.335917 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bmzkd"] Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336804 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="extract-content" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336818 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="extract-content" Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336835 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336841 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336865 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336870 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336882 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="extract-utilities" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336890 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="extract-utilities" Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336899 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="extract-content" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336904 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" 
containerName="extract-content" Feb 16 21:53:50 crc kubenswrapper[4792]: E0216 21:53:50.336914 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="extract-utilities" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.336920 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="extract-utilities" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.337046 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7902e27b-8b2d-4768-9930-4dfe6e0ffa4f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.337057 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e87d086-6c61-4dd8-9fce-3eb16952403f" containerName="registry-server" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.337645 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.340274 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.340315 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-q8tkl" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.341197 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.344307 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bmzkd"] Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.442525 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9825\" (UniqueName: \"kubernetes.io/projected/bf231cc9-0b32-43b0-ad49-55d1b28d977d-kube-api-access-n9825\") pod \"openstack-operator-index-bmzkd\" (UID: \"bf231cc9-0b32-43b0-ad49-55d1b28d977d\") " pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.544699 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9825\" (UniqueName: \"kubernetes.io/projected/bf231cc9-0b32-43b0-ad49-55d1b28d977d-kube-api-access-n9825\") pod \"openstack-operator-index-bmzkd\" (UID: \"bf231cc9-0b32-43b0-ad49-55d1b28d977d\") " pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.564303 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9825\" (UniqueName: \"kubernetes.io/projected/bf231cc9-0b32-43b0-ad49-55d1b28d977d-kube-api-access-n9825\") pod \"openstack-operator-index-bmzkd\" (UID: \"bf231cc9-0b32-43b0-ad49-55d1b28d977d\") " pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:53:50 crc kubenswrapper[4792]: I0216 21:53:50.660688 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:53:51 crc kubenswrapper[4792]: I0216 21:53:51.152030 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bmzkd"] Feb 16 21:53:51 crc kubenswrapper[4792]: I0216 21:53:51.536566 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:52 crc kubenswrapper[4792]: I0216 21:53:52.077150 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bmzkd" event={"ID":"bf231cc9-0b32-43b0-ad49-55d1b28d977d","Type":"ContainerStarted","Data":"5afabf2485c0ceab1f32c24605f9fc523d0d403c464b9fbb4fe535e1954b57a4"} Feb 16 21:53:54 crc kubenswrapper[4792]: I0216 21:53:54.093663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bmzkd" event={"ID":"bf231cc9-0b32-43b0-ad49-55d1b28d977d","Type":"ContainerStarted","Data":"f507654ed41a5da1952744ddd07490fe52f6db20aa88186bf3e74e8aa2cf0850"} Feb 16 21:53:54 crc kubenswrapper[4792]: I0216 21:53:54.925005 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bmzkd" podStartSLOduration=2.558793696 podStartE2EDuration="4.924981035s" podCreationTimestamp="2026-02-16 21:53:50 +0000 UTC" firstStartedPulling="2026-02-16 21:53:51.164635807 +0000 UTC m=+963.817914688" lastFinishedPulling="2026-02-16 21:53:53.530823096 +0000 UTC m=+966.184102027" observedRunningTime="2026-02-16 21:53:54.111279816 +0000 UTC m=+966.764558727" watchObservedRunningTime="2026-02-16 21:53:54.924981035 +0000 UTC m=+967.578260026" Feb 16 21:53:54 crc kubenswrapper[4792]: I0216 21:53:54.925589 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:54 crc kubenswrapper[4792]: I0216 21:53:54.925912 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tb8dq" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="registry-server" containerID="cri-o://73e9e687718d524d7944448415f0ebd3c72db4cb7e6b2e6373d6d5c70b2678ba" gracePeriod=2 Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.109310 4792 generic.go:334] "Generic (PLEG): container finished" podID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerID="73e9e687718d524d7944448415f0ebd3c72db4cb7e6b2e6373d6d5c70b2678ba" exitCode=0 Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.109717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerDied","Data":"73e9e687718d524d7944448415f0ebd3c72db4cb7e6b2e6373d6d5c70b2678ba"} Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.427746 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.522162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwprc\" (UniqueName: \"kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc\") pod \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.522348 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content\") pod \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.522492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities\") pod \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\" (UID: \"4e4d5363-baf6-48c6-84f6-31c0bd1796ec\") " Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.523961 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities" (OuterVolumeSpecName: "utilities") pod "4e4d5363-baf6-48c6-84f6-31c0bd1796ec" (UID: "4e4d5363-baf6-48c6-84f6-31c0bd1796ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.527090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc" (OuterVolumeSpecName: "kube-api-access-bwprc") pod "4e4d5363-baf6-48c6-84f6-31c0bd1796ec" (UID: "4e4d5363-baf6-48c6-84f6-31c0bd1796ec"). InnerVolumeSpecName "kube-api-access-bwprc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.575099 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e4d5363-baf6-48c6-84f6-31c0bd1796ec" (UID: "4e4d5363-baf6-48c6-84f6-31c0bd1796ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.624782 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.625091 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwprc\" (UniqueName: \"kubernetes.io/projected/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-kube-api-access-bwprc\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:55 crc kubenswrapper[4792]: I0216 21:53:55.625104 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4d5363-baf6-48c6-84f6-31c0bd1796ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.124519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb8dq" event={"ID":"4e4d5363-baf6-48c6-84f6-31c0bd1796ec","Type":"ContainerDied","Data":"d88b5a6abf7cd13e21e21955e73e1b628a1f2fea32bd97fbf7f57f01a54a68ce"} Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.124595 4792 scope.go:117] "RemoveContainer" containerID="73e9e687718d524d7944448415f0ebd3c72db4cb7e6b2e6373d6d5c70b2678ba" Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.124847 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb8dq" Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.144345 4792 scope.go:117] "RemoveContainer" containerID="4f28246603029189bce6382e1627d81a19bd18e1257b2e330e5a2f2c37267bde" Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.158531 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.164716 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tb8dq"] Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.166706 4792 scope.go:117] "RemoveContainer" containerID="c280d091a4658ab7da672d0c31fc7b946de9ab25da79f21f6fc2d302e0ca3ee0" Feb 16 21:53:56 crc kubenswrapper[4792]: I0216 21:53:56.589133 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-s7hh8" Feb 16 21:53:58 crc kubenswrapper[4792]: I0216 21:53:58.035907 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" path="/var/lib/kubelet/pods/4e4d5363-baf6-48c6-84f6-31c0bd1796ec/volumes" Feb 16 21:54:00 crc kubenswrapper[4792]: I0216 21:54:00.661035 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:54:00 crc kubenswrapper[4792]: I0216 21:54:00.661397 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:54:00 crc kubenswrapper[4792]: I0216 21:54:00.692471 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:54:01 crc kubenswrapper[4792]: I0216 21:54:01.231776 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bmzkd" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.182338 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7"] Feb 16 21:54:06 crc kubenswrapper[4792]: E0216 21:54:06.182967 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="registry-server" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.182979 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="registry-server" Feb 16 21:54:06 crc kubenswrapper[4792]: E0216 21:54:06.182991 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="extract-content" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.182997 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="extract-content" Feb 16 21:54:06 crc kubenswrapper[4792]: E0216 21:54:06.183010 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="extract-utilities" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.183016 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="extract-utilities" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.183149 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4d5363-baf6-48c6-84f6-31c0bd1796ec" containerName="registry-server" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.184178 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.188815 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-n8b86" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.204044 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7"] Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.328472 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.328538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.328566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rtn\" (UniqueName: \"kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " 
pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.430474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.430530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rtn\" (UniqueName: \"kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.430658 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.431091 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.431304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.454222 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rtn\" (UniqueName: \"kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.506226 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:06 crc kubenswrapper[4792]: I0216 21:54:06.980877 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7"] Feb 16 21:54:07 crc kubenswrapper[4792]: I0216 21:54:07.258094 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerStarted","Data":"79f4bd807f39af108b4928136fce0796f2b905eaca4f48d8558b051d743ee733"} Feb 16 21:54:07 crc kubenswrapper[4792]: I0216 21:54:07.258224 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerStarted","Data":"618a4130c500f2874428adecd11b5dcfd86c044e254fe9b0cb09c0c86fb37002"} Feb 16 21:54:08 crc kubenswrapper[4792]: I0216 21:54:08.268614 4792 generic.go:334] "Generic (PLEG): container finished" podID="68b216db-f03f-4138-a015-d41cb53a6492" containerID="79f4bd807f39af108b4928136fce0796f2b905eaca4f48d8558b051d743ee733" exitCode=0 Feb 16 21:54:08 crc kubenswrapper[4792]: I0216 21:54:08.268664 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerDied","Data":"79f4bd807f39af108b4928136fce0796f2b905eaca4f48d8558b051d743ee733"} Feb 16 21:54:09 crc kubenswrapper[4792]: I0216 21:54:09.294277 4792 generic.go:334] "Generic (PLEG): container finished" podID="68b216db-f03f-4138-a015-d41cb53a6492" containerID="826bc1a92c5c293bd13af55c1b70049e0b394a34a14b3e68f3ef3f049f062ee0" exitCode=0 Feb 16 21:54:09 crc kubenswrapper[4792]: I0216 21:54:09.294410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerDied","Data":"826bc1a92c5c293bd13af55c1b70049e0b394a34a14b3e68f3ef3f049f062ee0"} Feb 16 21:54:10 crc kubenswrapper[4792]: I0216 21:54:10.304971 4792 generic.go:334] "Generic (PLEG): container finished" podID="68b216db-f03f-4138-a015-d41cb53a6492" containerID="6a3361a976a959cde2db925135a517e3d6a15d4277b81c943a746492976c6aef" exitCode=0 Feb 16 21:54:10 crc kubenswrapper[4792]: I0216 21:54:10.305041 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerDied","Data":"6a3361a976a959cde2db925135a517e3d6a15d4277b81c943a746492976c6aef"} Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.674149 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.825772 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rtn\" (UniqueName: \"kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn\") pod \"68b216db-f03f-4138-a015-d41cb53a6492\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.825834 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util\") pod \"68b216db-f03f-4138-a015-d41cb53a6492\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.825925 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle\") pod \"68b216db-f03f-4138-a015-d41cb53a6492\" (UID: \"68b216db-f03f-4138-a015-d41cb53a6492\") " Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.827043 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle" (OuterVolumeSpecName: "bundle") pod "68b216db-f03f-4138-a015-d41cb53a6492" (UID: "68b216db-f03f-4138-a015-d41cb53a6492"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.831985 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn" (OuterVolumeSpecName: "kube-api-access-g4rtn") pod "68b216db-f03f-4138-a015-d41cb53a6492" (UID: "68b216db-f03f-4138-a015-d41cb53a6492"). InnerVolumeSpecName "kube-api-access-g4rtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.846577 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util" (OuterVolumeSpecName: "util") pod "68b216db-f03f-4138-a015-d41cb53a6492" (UID: "68b216db-f03f-4138-a015-d41cb53a6492"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.927319 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rtn\" (UniqueName: \"kubernetes.io/projected/68b216db-f03f-4138-a015-d41cb53a6492-kube-api-access-g4rtn\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.927537 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-util\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:11 crc kubenswrapper[4792]: I0216 21:54:11.927641 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68b216db-f03f-4138-a015-d41cb53a6492-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:54:12 crc kubenswrapper[4792]: I0216 21:54:12.325503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" event={"ID":"68b216db-f03f-4138-a015-d41cb53a6492","Type":"ContainerDied","Data":"618a4130c500f2874428adecd11b5dcfd86c044e254fe9b0cb09c0c86fb37002"} Feb 16 21:54:12 crc kubenswrapper[4792]: I0216 21:54:12.325573 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618a4130c500f2874428adecd11b5dcfd86c044e254fe9b0cb09c0c86fb37002" Feb 16 21:54:12 crc kubenswrapper[4792]: I0216 21:54:12.325640 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.739113 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn"] Feb 16 21:54:15 crc kubenswrapper[4792]: E0216 21:54:15.740374 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="extract" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.740393 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="extract" Feb 16 21:54:15 crc kubenswrapper[4792]: E0216 21:54:15.740423 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="pull" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.740433 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="pull" Feb 16 21:54:15 crc kubenswrapper[4792]: E0216 21:54:15.740462 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="util" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.740469 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="util" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.740686 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b216db-f03f-4138-a015-d41cb53a6492" containerName="extract" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.745011 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.747827 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5wzl6" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.765490 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn"] Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.896876 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvc4s\" (UniqueName: \"kubernetes.io/projected/9fe0c3a6-98f0-4c15-926e-b9b4e05711db-kube-api-access-qvc4s\") pod \"openstack-operator-controller-init-7845fcf9cf-frtrn\" (UID: \"9fe0c3a6-98f0-4c15-926e-b9b4e05711db\") " pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:15 crc kubenswrapper[4792]: I0216 21:54:15.998712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvc4s\" (UniqueName: \"kubernetes.io/projected/9fe0c3a6-98f0-4c15-926e-b9b4e05711db-kube-api-access-qvc4s\") pod \"openstack-operator-controller-init-7845fcf9cf-frtrn\" (UID: \"9fe0c3a6-98f0-4c15-926e-b9b4e05711db\") " pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:16 crc kubenswrapper[4792]: I0216 21:54:16.019872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvc4s\" (UniqueName: \"kubernetes.io/projected/9fe0c3a6-98f0-4c15-926e-b9b4e05711db-kube-api-access-qvc4s\") pod \"openstack-operator-controller-init-7845fcf9cf-frtrn\" (UID: \"9fe0c3a6-98f0-4c15-926e-b9b4e05711db\") " pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:16 crc kubenswrapper[4792]: I0216 21:54:16.063454 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:16 crc kubenswrapper[4792]: I0216 21:54:16.713730 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn"] Feb 16 21:54:16 crc kubenswrapper[4792]: W0216 21:54:16.715041 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe0c3a6_98f0_4c15_926e_b9b4e05711db.slice/crio-fafdc83a2254a4b9928c9f4f9de321a99669d3b04eb557d92e50d1adef780661 WatchSource:0}: Error finding container fafdc83a2254a4b9928c9f4f9de321a99669d3b04eb557d92e50d1adef780661: Status 404 returned error can't find the container with id fafdc83a2254a4b9928c9f4f9de321a99669d3b04eb557d92e50d1adef780661 Feb 16 21:54:17 crc kubenswrapper[4792]: I0216 21:54:17.375813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" event={"ID":"9fe0c3a6-98f0-4c15-926e-b9b4e05711db","Type":"ContainerStarted","Data":"fafdc83a2254a4b9928c9f4f9de321a99669d3b04eb557d92e50d1adef780661"} Feb 16 21:54:21 crc kubenswrapper[4792]: I0216 21:54:21.407423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" event={"ID":"9fe0c3a6-98f0-4c15-926e-b9b4e05711db","Type":"ContainerStarted","Data":"d3c68f6d674aa837809d34afc7dd6eb43bd678691a0184e77d5a856ea766b1d9"} Feb 16 21:54:21 crc kubenswrapper[4792]: I0216 21:54:21.407844 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:26 crc kubenswrapper[4792]: I0216 21:54:26.066808 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" Feb 16 21:54:26 crc kubenswrapper[4792]: I0216 21:54:26.100893 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7845fcf9cf-frtrn" podStartSLOduration=6.959244275 podStartE2EDuration="11.100873597s" podCreationTimestamp="2026-02-16 21:54:15 +0000 UTC" firstStartedPulling="2026-02-16 21:54:16.717678078 +0000 UTC m=+989.370956979" lastFinishedPulling="2026-02-16 21:54:20.85930741 +0000 UTC m=+993.512586301" observedRunningTime="2026-02-16 21:54:21.436867898 +0000 UTC m=+994.090146809" watchObservedRunningTime="2026-02-16 21:54:26.100873597 +0000 UTC m=+998.754152498" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.885511 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.889782 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.897420 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.898628 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.904028 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-l2wsf" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.904462 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-p87pv" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.920776 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.933148 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.942227 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.958772 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-68zdd"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.960006 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.960006 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.963466 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-txqg7" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.963783 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rrxcw" Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.982449 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l"] Feb 16 21:55:02 crc kubenswrapper[4792]: I0216 21:55:02.990448 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:02.991750 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:02.994053 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g5zwq" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.005572 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5b5m\" (UniqueName: \"kubernetes.io/projected/0031ef47-8c9b-43e3-8484-f1400d13b1c0-kube-api-access-w5b5m\") pod \"cinder-operator-controller-manager-5d946d989d-q68hm\" (UID: \"0031ef47-8c9b-43e3-8484-f1400d13b1c0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.005705 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbj9t\" (UniqueName: \"kubernetes.io/projected/3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5-kube-api-access-zbj9t\") pod \"barbican-operator-controller-manager-868647ff47-ckk8x\" (UID: \"3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.041142 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-68zdd"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.078127 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.079565 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.083269 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2t24q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.118578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smctr\" (UniqueName: \"kubernetes.io/projected/14a0a678-34ee-46ea-97b2-dda55282c312-kube-api-access-smctr\") pod \"heat-operator-controller-manager-69f49c598c-kwchw\" (UID: \"14a0a678-34ee-46ea-97b2-dda55282c312\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.119640 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbj9t\" (UniqueName: \"kubernetes.io/projected/3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5-kube-api-access-zbj9t\") pod \"barbican-operator-controller-manager-868647ff47-ckk8x\" (UID: \"3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.119691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24ss\" (UniqueName: \"kubernetes.io/projected/2c61991d-c4f0-4ac4-81af-951bbb318042-kube-api-access-l24ss\") pod \"horizon-operator-controller-manager-5b9b8895d5-c7g29\" (UID: \"2c61991d-c4f0-4ac4-81af-951bbb318042\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 
21:55:03.119766 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc25l\" (UniqueName: \"kubernetes.io/projected/e79f0a7a-0416-4cbe-b6ec-c52db85aae80-kube-api-access-dc25l\") pod \"glance-operator-controller-manager-77987464f4-68zdd\" (UID: \"e79f0a7a-0416-4cbe-b6ec-c52db85aae80\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.119869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5b5m\" (UniqueName: \"kubernetes.io/projected/0031ef47-8c9b-43e3-8484-f1400d13b1c0-kube-api-access-w5b5m\") pod \"cinder-operator-controller-manager-5d946d989d-q68hm\" (UID: \"0031ef47-8c9b-43e3-8484-f1400d13b1c0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.119913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl4tt\" (UniqueName: \"kubernetes.io/projected/f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f-kube-api-access-rl4tt\") pod \"designate-operator-controller-manager-6d8bf5c495-bdq8l\" (UID: \"f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.127343 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.156541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5b5m\" (UniqueName: \"kubernetes.io/projected/0031ef47-8c9b-43e3-8484-f1400d13b1c0-kube-api-access-w5b5m\") pod \"cinder-operator-controller-manager-5d946d989d-q68hm\" (UID: \"0031ef47-8c9b-43e3-8484-f1400d13b1c0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.156571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbj9t\" (UniqueName: \"kubernetes.io/projected/3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5-kube-api-access-zbj9t\") pod \"barbican-operator-controller-manager-868647ff47-ckk8x\" (UID: \"3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.183499 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.190455 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-d52s2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.191588 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.196343 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dc5s4" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.204094 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221030 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smctr\" (UniqueName: \"kubernetes.io/projected/14a0a678-34ee-46ea-97b2-dda55282c312-kube-api-access-smctr\") pod \"heat-operator-controller-manager-69f49c598c-kwchw\" (UID: \"14a0a678-34ee-46ea-97b2-dda55282c312\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24ss\" (UniqueName: \"kubernetes.io/projected/2c61991d-c4f0-4ac4-81af-951bbb318042-kube-api-access-l24ss\") pod \"horizon-operator-controller-manager-5b9b8895d5-c7g29\" (UID: \"2c61991d-c4f0-4ac4-81af-951bbb318042\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc25l\" (UniqueName: \"kubernetes.io/projected/e79f0a7a-0416-4cbe-b6ec-c52db85aae80-kube-api-access-dc25l\") pod \"glance-operator-controller-manager-77987464f4-68zdd\" (UID: \"e79f0a7a-0416-4cbe-b6ec-c52db85aae80\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221154 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221188 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl4tt\" (UniqueName: \"kubernetes.io/projected/f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f-kube-api-access-rl4tt\") pod \"designate-operator-controller-manager-6d8bf5c495-bdq8l\" (UID: \"f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.221232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcwb\" (UniqueName: \"kubernetes.io/projected/0ca1643f-fcdd-4500-b446-06862c80c736-kube-api-access-fmcwb\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.224523 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-d52s2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.246015 4792 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.250803 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc25l\" (UniqueName: \"kubernetes.io/projected/e79f0a7a-0416-4cbe-b6ec-c52db85aae80-kube-api-access-dc25l\") pod \"glance-operator-controller-manager-77987464f4-68zdd\" (UID: \"e79f0a7a-0416-4cbe-b6ec-c52db85aae80\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.261388 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.267644 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.268717 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.276514 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5kw68" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.308438 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.315399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smctr\" (UniqueName: \"kubernetes.io/projected/14a0a678-34ee-46ea-97b2-dda55282c312-kube-api-access-smctr\") pod \"heat-operator-controller-manager-69f49c598c-kwchw\" (UID: \"14a0a678-34ee-46ea-97b2-dda55282c312\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.316355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24ss\" (UniqueName: \"kubernetes.io/projected/2c61991d-c4f0-4ac4-81af-951bbb318042-kube-api-access-l24ss\") pod \"horizon-operator-controller-manager-5b9b8895d5-c7g29\" (UID: \"2c61991d-c4f0-4ac4-81af-951bbb318042\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.315403 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl4tt\" (UniqueName: \"kubernetes.io/projected/f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f-kube-api-access-rl4tt\") pod \"designate-operator-controller-manager-6d8bf5c495-bdq8l\" (UID: \"f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.322450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcwb\" (UniqueName: \"kubernetes.io/projected/0ca1643f-fcdd-4500-b446-06862c80c736-kube-api-access-fmcwb\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.322578 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.322669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94bj\" (UniqueName: \"kubernetes.io/projected/3552825c-be0d-4a97-9caf-f8a1ceb96564-kube-api-access-c94bj\") pod \"ironic-operator-controller-manager-554564d7fc-5jfgv\" (UID: \"3552825c-be0d-4a97-9caf-f8a1ceb96564\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:03 crc kubenswrapper[4792]: E0216 21:55:03.323148 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:03 crc kubenswrapper[4792]: E0216 21:55:03.323199 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert podName:0ca1643f-fcdd-4500-b446-06862c80c736 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:03.823180082 +0000 UTC m=+1036.476458973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert") pod "infra-operator-controller-manager-79d975b745-d52s2" (UID: "0ca1643f-fcdd-4500-b446-06862c80c736") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.350100 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.357853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcwb\" (UniqueName: \"kubernetes.io/projected/0ca1643f-fcdd-4500-b446-06862c80c736-kube-api-access-fmcwb\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.369131 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.413679 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.414819 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.423435 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jhj5n" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.424358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfjn\" (UniqueName: \"kubernetes.io/projected/8b18ef30-f020-4cf7-8068-69f90696ac66-kube-api-access-dkfjn\") pod \"keystone-operator-controller-manager-b4d948c87-n9g6q\" (UID: \"8b18ef30-f020-4cf7-8068-69f90696ac66\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.424395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94bj\" (UniqueName: \"kubernetes.io/projected/3552825c-be0d-4a97-9caf-f8a1ceb96564-kube-api-access-c94bj\") pod \"ironic-operator-controller-manager-554564d7fc-5jfgv\" (UID: \"3552825c-be0d-4a97-9caf-f8a1ceb96564\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.426666 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.427008 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.431228 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.447414 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94bj\" (UniqueName: \"kubernetes.io/projected/3552825c-be0d-4a97-9caf-f8a1ceb96564-kube-api-access-c94bj\") pod \"ironic-operator-controller-manager-554564d7fc-5jfgv\" (UID: \"3552825c-be0d-4a97-9caf-f8a1ceb96564\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.504658 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.506171 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.511167 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2v8fj" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.530214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8b4r\" (UniqueName: \"kubernetes.io/projected/bd4eda7b-78cc-4c87-9210-6c9581ad3fab-kube-api-access-b8b4r\") pod \"manila-operator-controller-manager-54f6768c69-xl8k2\" (UID: \"bd4eda7b-78cc-4c87-9210-6c9581ad3fab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.530314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkfjn\" (UniqueName: \"kubernetes.io/projected/8b18ef30-f020-4cf7-8068-69f90696ac66-kube-api-access-dkfjn\") pod \"keystone-operator-controller-manager-b4d948c87-n9g6q\" (UID: \"8b18ef30-f020-4cf7-8068-69f90696ac66\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.594984 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.596564 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.605399 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-k8747" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.641513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8b4r\" (UniqueName: \"kubernetes.io/projected/bd4eda7b-78cc-4c87-9210-6c9581ad3fab-kube-api-access-b8b4r\") pod \"manila-operator-controller-manager-54f6768c69-xl8k2\" (UID: \"bd4eda7b-78cc-4c87-9210-6c9581ad3fab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.641612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6l4\" (UniqueName: \"kubernetes.io/projected/16470449-37c4-419d-8932-f0c7ee201aaa-kube-api-access-fp6l4\") pod \"mariadb-operator-controller-manager-6994f66f48-gsjf4\" (UID: \"16470449-37c4-419d-8932-f0c7ee201aaa\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.652239 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.660677 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkfjn\" (UniqueName: \"kubernetes.io/projected/8b18ef30-f020-4cf7-8068-69f90696ac66-kube-api-access-dkfjn\") pod \"keystone-operator-controller-manager-b4d948c87-n9g6q\" (UID: \"8b18ef30-f020-4cf7-8068-69f90696ac66\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.680156 4792 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.723202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8b4r\" (UniqueName: \"kubernetes.io/projected/bd4eda7b-78cc-4c87-9210-6c9581ad3fab-kube-api-access-b8b4r\") pod \"manila-operator-controller-manager-54f6768c69-xl8k2\" (UID: \"bd4eda7b-78cc-4c87-9210-6c9581ad3fab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.739228 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.745304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6l4\" (UniqueName: \"kubernetes.io/projected/16470449-37c4-419d-8932-f0c7ee201aaa-kube-api-access-fp6l4\") pod \"mariadb-operator-controller-manager-6994f66f48-gsjf4\" (UID: \"16470449-37c4-419d-8932-f0c7ee201aaa\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.765021 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.766989 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.770388 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.784281 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.785208 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vn7cm" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.787055 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-52tpk" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.794461 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6l4\" (UniqueName: \"kubernetes.io/projected/16470449-37c4-419d-8932-f0c7ee201aaa-kube-api-access-fp6l4\") pod \"mariadb-operator-controller-manager-6994f66f48-gsjf4\" (UID: \"16470449-37c4-419d-8932-f0c7ee201aaa\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.820801 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.849242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznds\" (UniqueName: \"kubernetes.io/projected/47b9a9f7-c72f-45ae-96ea-1e8b19065304-kube-api-access-wznds\") pod \"nova-operator-controller-manager-567668f5cf-8fcb2\" (UID: \"47b9a9f7-c72f-45ae-96ea-1e8b19065304\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.849341 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:03 crc kubenswrapper[4792]: E0216 21:55:03.849586 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:03 crc kubenswrapper[4792]: E0216 21:55:03.849665 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert podName:0ca1643f-fcdd-4500-b446-06862c80c736 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:04.849650297 +0000 UTC m=+1037.502929188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert") pod "infra-operator-controller-manager-79d975b745-d52s2" (UID: "0ca1643f-fcdd-4500-b446-06862c80c736") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.879542 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.912421 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.913648 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.921838 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l6q7f" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.953700 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.967663 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9"] Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.987095 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:03 crc kubenswrapper[4792]: I0216 21:55:03.987711 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:03.991523 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:03.993331 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.000092 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.001379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznds\" (UniqueName: \"kubernetes.io/projected/47b9a9f7-c72f-45ae-96ea-1e8b19065304-kube-api-access-wznds\") pod \"nova-operator-controller-manager-567668f5cf-8fcb2\" (UID: \"47b9a9f7-c72f-45ae-96ea-1e8b19065304\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.001934 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6zlbl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.002070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsj5\" (UniqueName: \"kubernetes.io/projected/8d8bb033-cde2-41c5-9ac9-ea761df10203-kube-api-access-tmsj5\") pod \"octavia-operator-controller-manager-69f8888797-xklb9\" (UID: \"8d8bb033-cde2-41c5-9ac9-ea761df10203\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.013822 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.015264 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.022247 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xvvv7" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.027906 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.040767 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznds\" (UniqueName: \"kubernetes.io/projected/47b9a9f7-c72f-45ae-96ea-1e8b19065304-kube-api-access-wznds\") pod \"nova-operator-controller-manager-567668f5cf-8fcb2\" (UID: \"47b9a9f7-c72f-45ae-96ea-1e8b19065304\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.090451 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.090479 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.091336 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.091926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.091983 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.092052 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.096023 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-cpzqw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.096072 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6kxzq" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107064 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gzz\" (UniqueName: \"kubernetes.io/projected/7bd0c0a5-5844-4906-bafc-1806ca7901a7-kube-api-access-d6gzz\") pod \"ovn-operator-controller-manager-d44cf6b75-8qm72\" (UID: \"7bd0c0a5-5844-4906-bafc-1806ca7901a7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107112 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wlr\" (UniqueName: \"kubernetes.io/projected/bd719b4e-7fbb-48d2-ab0f-3a0257fe4070-kube-api-access-44wlr\") pod \"neutron-operator-controller-manager-64ddbf8bb-bzg6v\" (UID: \"bd719b4e-7fbb-48d2-ab0f-3a0257fe4070\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lkq\" (UniqueName: \"kubernetes.io/projected/1afa399d-c3b2-4ad7-a61d-b139e3a975ae-kube-api-access-k6lkq\") pod \"swift-operator-controller-manager-68f46476f-bxt7g\" (UID: \"1afa399d-c3b2-4ad7-a61d-b139e3a975ae\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107240 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgshr\" (UniqueName: \"kubernetes.io/projected/6d7fec09-c983-4893-b691-10fec0ee2206-kube-api-access-dgshr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvlw\" (UniqueName: \"kubernetes.io/projected/545d4d3f-7ef6-413d-a879-59591fbb7f16-kube-api-access-mqvlw\") pod \"placement-operator-controller-manager-8497b45c89-ld8dz\" (UID: \"545d4d3f-7ef6-413d-a879-59591fbb7f16\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.107357 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsj5\" 
(UniqueName: \"kubernetes.io/projected/8d8bb033-cde2-41c5-9ac9-ea761df10203-kube-api-access-tmsj5\") pod \"octavia-operator-controller-manager-69f8888797-xklb9\" (UID: \"8d8bb033-cde2-41c5-9ac9-ea761df10203\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.122103 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.125673 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.130687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsj5\" (UniqueName: \"kubernetes.io/projected/8d8bb033-cde2-41c5-9ac9-ea761df10203-kube-api-access-tmsj5\") pod \"octavia-operator-controller-manager-69f8888797-xklb9\" (UID: \"8d8bb033-cde2-41c5-9ac9-ea761df10203\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.166407 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.167501 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.168984 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hth29" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209026 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgshr\" (UniqueName: \"kubernetes.io/projected/6d7fec09-c983-4893-b691-10fec0ee2206-kube-api-access-dgshr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209069 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvlw\" (UniqueName: \"kubernetes.io/projected/545d4d3f-7ef6-413d-a879-59591fbb7f16-kube-api-access-mqvlw\") pod \"placement-operator-controller-manager-8497b45c89-ld8dz\" (UID: \"545d4d3f-7ef6-413d-a879-59591fbb7f16\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gzz\" (UniqueName: \"kubernetes.io/projected/7bd0c0a5-5844-4906-bafc-1806ca7901a7-kube-api-access-d6gzz\") pod \"ovn-operator-controller-manager-d44cf6b75-8qm72\" (UID: \"7bd0c0a5-5844-4906-bafc-1806ca7901a7\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44wlr\" (UniqueName: \"kubernetes.io/projected/bd719b4e-7fbb-48d2-ab0f-3a0257fe4070-kube-api-access-44wlr\") pod \"neutron-operator-controller-manager-64ddbf8bb-bzg6v\" (UID: \"bd719b4e-7fbb-48d2-ab0f-3a0257fe4070\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.209274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lkq\" (UniqueName: \"kubernetes.io/projected/1afa399d-c3b2-4ad7-a61d-b139e3a975ae-kube-api-access-k6lkq\") pod \"swift-operator-controller-manager-68f46476f-bxt7g\" (UID: \"1afa399d-c3b2-4ad7-a61d-b139e3a975ae\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.209755 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.209804 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:04.709787136 +0000 UTC m=+1037.363066027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.222129 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.244631 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-6nlgl"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.245987 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.249966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44wlr\" (UniqueName: \"kubernetes.io/projected/bd719b4e-7fbb-48d2-ab0f-3a0257fe4070-kube-api-access-44wlr\") pod \"neutron-operator-controller-manager-64ddbf8bb-bzg6v\" (UID: \"bd719b4e-7fbb-48d2-ab0f-3a0257fe4070\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.250253 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2cwqb" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.252646 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lkq\" (UniqueName: \"kubernetes.io/projected/1afa399d-c3b2-4ad7-a61d-b139e3a975ae-kube-api-access-k6lkq\") pod \"swift-operator-controller-manager-68f46476f-bxt7g\" (UID: \"1afa399d-c3b2-4ad7-a61d-b139e3a975ae\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.255860 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgshr\" (UniqueName: \"kubernetes.io/projected/6d7fec09-c983-4893-b691-10fec0ee2206-kube-api-access-dgshr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.257161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gzz\" (UniqueName: \"kubernetes.io/projected/7bd0c0a5-5844-4906-bafc-1806ca7901a7-kube-api-access-d6gzz\") pod \"ovn-operator-controller-manager-d44cf6b75-8qm72\" (UID: \"7bd0c0a5-5844-4906-bafc-1806ca7901a7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.257216 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvlw\" (UniqueName: \"kubernetes.io/projected/545d4d3f-7ef6-413d-a879-59591fbb7f16-kube-api-access-mqvlw\") pod \"placement-operator-controller-manager-8497b45c89-ld8dz\" (UID: \"545d4d3f-7ef6-413d-a879-59591fbb7f16\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.266847 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-6nlgl"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.267209 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:04 crc kubenswrapper[4792]: W0216 21:55:04.276920 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3198bf1a_e4e7_4f1b_bc18_79581f4cc1c5.slice/crio-d1e9d37894a99a0c91948787975d716922c56e6ec7d55556154e22c852fcfaf2 WatchSource:0}: Error finding container d1e9d37894a99a0c91948787975d716922c56e6ec7d55556154e22c852fcfaf2: Status 404 returned error can't find the container with id d1e9d37894a99a0c91948787975d716922c56e6ec7d55556154e22c852fcfaf2 Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.286541 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.287945 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.292179 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-lz7x4" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.296891 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.321197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqmj\" (UniqueName: \"kubernetes.io/projected/be6b1607-d6a3-4970-80c3-e1368db4877e-kube-api-access-5lqmj\") pod \"test-operator-controller-manager-7866795846-6nlgl\" (UID: \"be6b1607-d6a3-4970-80c3-e1368db4877e\") " pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.321288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzts\" (UniqueName: \"kubernetes.io/projected/fe04b110-3ba2-468b-ae82-ae43720f03ad-kube-api-access-fxzts\") pod \"telemetry-operator-controller-manager-79996fd568-rkdpn\" (UID: \"fe04b110-3ba2-468b-ae82-ae43720f03ad\") " pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.323681 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.359903 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.361007 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.363502 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.363756 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.364306 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cldfn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.375381 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.406630 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.407814 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.411438 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.422398 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-w4zcj" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.423658 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lqmj\" (UniqueName: \"kubernetes.io/projected/be6b1607-d6a3-4970-80c3-e1368db4877e-kube-api-access-5lqmj\") pod \"test-operator-controller-manager-7866795846-6nlgl\" (UID: \"be6b1607-d6a3-4970-80c3-e1368db4877e\") " pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.423690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zml7\" (UniqueName: \"kubernetes.io/projected/7b4f7a7e-b90d-4210-8254-ae10083bf021-kube-api-access-2zml7\") pod \"watcher-operator-controller-manager-5db88f68c-qc68s\" (UID: \"7b4f7a7e-b90d-4210-8254-ae10083bf021\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.423719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzts\" (UniqueName: \"kubernetes.io/projected/fe04b110-3ba2-468b-ae82-ae43720f03ad-kube-api-access-fxzts\") pod \"telemetry-operator-controller-manager-79996fd568-rkdpn\" (UID: \"fe04b110-3ba2-468b-ae82-ae43720f03ad\") " pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.448736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lqmj\" (UniqueName: \"kubernetes.io/projected/be6b1607-d6a3-4970-80c3-e1368db4877e-kube-api-access-5lqmj\") pod \"test-operator-controller-manager-7866795846-6nlgl\" (UID: \"be6b1607-d6a3-4970-80c3-e1368db4877e\") " pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 
16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.450214 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.451735 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.465102 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzts\" (UniqueName: \"kubernetes.io/projected/fe04b110-3ba2-468b-ae82-ae43720f03ad-kube-api-access-fxzts\") pod \"telemetry-operator-controller-manager-79996fd568-rkdpn\" (UID: \"fe04b110-3ba2-468b-ae82-ae43720f03ad\") " pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.507374 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.522566 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.526610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.526663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.526689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vp2m\" (UniqueName: \"kubernetes.io/projected/63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee-kube-api-access-8vp2m\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6qzwl\" (UID: \"63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.526713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zml7\" (UniqueName: \"kubernetes.io/projected/7b4f7a7e-b90d-4210-8254-ae10083bf021-kube-api-access-2zml7\") pod \"watcher-operator-controller-manager-5db88f68c-qc68s\" (UID: \"7b4f7a7e-b90d-4210-8254-ae10083bf021\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.526767 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlvm\" (UniqueName: \"kubernetes.io/projected/4b00b428-3d0e-4120-a21c-7722e529fde5-kube-api-access-njlvm\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " 
pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.555873 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zml7\" (UniqueName: \"kubernetes.io/projected/7b4f7a7e-b90d-4210-8254-ae10083bf021-kube-api-access-2zml7\") pod \"watcher-operator-controller-manager-5db88f68c-qc68s\" (UID: \"7b4f7a7e-b90d-4210-8254-ae10083bf021\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.608657 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-68zdd"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.614179 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.628110 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njlvm\" (UniqueName: \"kubernetes.io/projected/4b00b428-3d0e-4120-a21c-7722e529fde5-kube-api-access-njlvm\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.628291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.628352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.628387 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vp2m\" (UniqueName: \"kubernetes.io/projected/63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee-kube-api-access-8vp2m\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6qzwl\" (UID: \"63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.629176 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.629231 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:05.129213826 +0000 UTC m=+1037.782492727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.629394 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.629439 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:05.129429222 +0000 UTC m=+1037.782708113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.654227 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njlvm\" (UniqueName: \"kubernetes.io/projected/4b00b428-3d0e-4120-a21c-7722e529fde5-kube-api-access-njlvm\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.664260 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.665720 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vp2m\" (UniqueName: \"kubernetes.io/projected/63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee-kube-api-access-8vp2m\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6qzwl\" (UID: \"63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.691421 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.702196 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.731414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.731756 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.731809 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:05.731795683 +0000 UTC m=+1038.385074574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.750155 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.869530 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" event={"ID":"e79f0a7a-0416-4cbe-b6ec-c52db85aae80","Type":"ContainerStarted","Data":"ed1c76c395cff8ea3716223ee18f5296466a7bd03974160af34f5efdd20dd631"} Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.870329 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" event={"ID":"3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5","Type":"ContainerStarted","Data":"d1e9d37894a99a0c91948787975d716922c56e6ec7d55556154e22c852fcfaf2"} Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.872094 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" event={"ID":"0031ef47-8c9b-43e3-8484-f1400d13b1c0","Type":"ContainerStarted","Data":"9efb6503ae1d4349bf7e6cac05c0b2b602b7d1fcf1c5979b5388e6b7056cda6e"} Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.934896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.935101 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: E0216 21:55:04.935152 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert podName:0ca1643f-fcdd-4500-b446-06862c80c736 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:06.935138399 +0000 UTC m=+1039.588417290 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert") pod "infra-operator-controller-manager-79d975b745-d52s2" (UID: "0ca1643f-fcdd-4500-b446-06862c80c736") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.971200 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l"] Feb 16 21:55:04 crc kubenswrapper[4792]: W0216 21:55:04.979028 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3552825c_be0d_4a97_9caf_f8a1ceb96564.slice/crio-b171d6d373c0b3f3047390b7dc23e3d8898b919509188ffa2174462ff73cb6ef WatchSource:0}: Error finding container b171d6d373c0b3f3047390b7dc23e3d8898b919509188ffa2174462ff73cb6ef: Status 404 returned error can't find the container with id b171d6d373c0b3f3047390b7dc23e3d8898b919509188ffa2174462ff73cb6ef Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.984647 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv"] Feb 16 21:55:04 crc kubenswrapper[4792]: I0216 21:55:04.999481 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.006793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.141497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.141564 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.141758 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.141802 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:06.141788993 +0000 UTC m=+1038.795067884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.142161 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.142190 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:06.142183233 +0000 UTC m=+1038.795462114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.416864 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.460584 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2"] Feb 16 21:55:05 crc kubenswrapper[4792]: W0216 21:55:05.481234 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b9a9f7_c72f_45ae_96ea_1e8b19065304.slice/crio-c3cd9157a5baa8c4af87b044fbcae26b01a3cad060b638d270b45c7035949ff9 WatchSource:0}: Error finding container c3cd9157a5baa8c4af87b044fbcae26b01a3cad060b638d270b45c7035949ff9: Status 404 returned error can't find the container with id c3cd9157a5baa8c4af87b044fbcae26b01a3cad060b638d270b45c7035949ff9 Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.481374 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.495980 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.774051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.774278 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: E0216 21:55:05.774334 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:07.774319247 +0000 UTC m=+1040.427598138 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.852873 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.866166 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.894507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" event={"ID":"8b18ef30-f020-4cf7-8068-69f90696ac66","Type":"ContainerStarted","Data":"6a775c3c2800da9066d6349e1f5d24a33364980ea91f837b37d7f2788552f3ca"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.897957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" event={"ID":"47b9a9f7-c72f-45ae-96ea-1e8b19065304","Type":"ContainerStarted","Data":"c3cd9157a5baa8c4af87b044fbcae26b01a3cad060b638d270b45c7035949ff9"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.904064 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.916072 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v"] Feb 16 21:55:05 crc kubenswrapper[4792]: W0216 21:55:05.929972 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod545d4d3f_7ef6_413d_a879_59591fbb7f16.slice/crio-ef0a4895a2265433441b973c8b582e5d3a74841a188b5c6861c6283031b30e6d WatchSource:0}: Error finding container ef0a4895a2265433441b973c8b582e5d3a74841a188b5c6861c6283031b30e6d: Status 404 returned error can't find the container with id ef0a4895a2265433441b973c8b582e5d3a74841a188b5c6861c6283031b30e6d Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.930827 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn"] Feb 16 21:55:05 crc kubenswrapper[4792]: W0216 21:55:05.930900 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe04b110_3ba2_468b_ae82_ae43720f03ad.slice/crio-23d6e3fc4b9810f681767d1b07af85e2c2347c3f297e068cbde5717d7c278d60 WatchSource:0}: Error finding container 23d6e3fc4b9810f681767d1b07af85e2c2347c3f297e068cbde5717d7c278d60: Status 404 returned error can't find the container with id 23d6e3fc4b9810f681767d1b07af85e2c2347c3f297e068cbde5717d7c278d60 Feb 16 21:55:05 crc kubenswrapper[4792]: W0216 21:55:05.933309 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd719b4e_7fbb_48d2_ab0f_3a0257fe4070.slice/crio-c0ccbae28b9ee72b2451d2c5256bdfaec6dde5aa5d4f0dae514203aba0f701c0 WatchSource:0}: Error finding container c0ccbae28b9ee72b2451d2c5256bdfaec6dde5aa5d4f0dae514203aba0f701c0: Status 404 returned error can't find the container with id 
c0ccbae28b9ee72b2451d2c5256bdfaec6dde5aa5d4f0dae514203aba0f701c0 Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.933944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" event={"ID":"f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f","Type":"ContainerStarted","Data":"1bc8d8535991474100aef3fa72705e6afd95f45f06c621cd60a095d8837a8fbf"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.937981 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" event={"ID":"bd4eda7b-78cc-4c87-9210-6c9581ad3fab","Type":"ContainerStarted","Data":"01853f13e325936e94ede441984016466704154a62d9f00c91d0e8979722a548"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.941200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" event={"ID":"16470449-37c4-419d-8932-f0c7ee201aaa","Type":"ContainerStarted","Data":"330d79d35cb686c91928e6562d77db1418e3bd99ffec923288195aa34074f520"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.942414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz"] Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.946351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" event={"ID":"14a0a678-34ee-46ea-97b2-dda55282c312","Type":"ContainerStarted","Data":"2f4b4fc72246f19ce0093cac4c6518d632adc01ab20fc5916453cc7ca89fe651"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.950862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" event={"ID":"2c61991d-c4f0-4ac4-81af-951bbb318042","Type":"ContainerStarted","Data":"84aa831dadafe91aa7623f398546a04ce603eec007caed297bbc7a7ec8099052"} Feb 16 21:55:05 crc kubenswrapper[4792]: I0216 21:55:05.952535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" event={"ID":"3552825c-be0d-4a97-9caf-f8a1ceb96564","Type":"ContainerStarted","Data":"b171d6d373c0b3f3047390b7dc23e3d8898b919509188ffa2174462ff73cb6ef"} Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.181344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.181398 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.181536 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.181589 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:08.181573224 +0000 UTC m=+1040.834852115 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.181939 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.181968 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:08.181960135 +0000 UTC m=+1040.835239026 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.228287 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-6nlgl"] Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.239142 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s"] Feb 16 21:55:06 crc kubenswrapper[4792]: W0216 21:55:06.252081 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe6b1607_d6a3_4970_80c3_e1368db4877e.slice/crio-58764dbbc4255c48ff24f67a3ab704ce00fa6847ddd139fe1fca9cd7ac2e394f WatchSource:0}: Error finding container 58764dbbc4255c48ff24f67a3ab704ce00fa6847ddd139fe1fca9cd7ac2e394f: Status 404 returned error can't find the container with id 58764dbbc4255c48ff24f67a3ab704ce00fa6847ddd139fe1fca9cd7ac2e394f Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.270275 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl"] Feb 16 21:55:06 crc kubenswrapper[4792]: W0216 21:55:06.291898 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b5bb19_3cd9_4c45_a3a7_8c01e0a2a3ee.slice/crio-a7781a401c37924f9591b5903062b904374bfe816c7e3f9cc77886a6bfc89e7a WatchSource:0}: Error finding container a7781a401c37924f9591b5903062b904374bfe816c7e3f9cc77886a6bfc89e7a: Status 404 returned error can't find the container with id a7781a401c37924f9591b5903062b904374bfe816c7e3f9cc77886a6bfc89e7a Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.303315 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vp2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6qzwl_openstack-operators(63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.304858 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" podUID="63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee" Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.978957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" event={"ID":"7b4f7a7e-b90d-4210-8254-ae10083bf021","Type":"ContainerStarted","Data":"72afcd02e2495f79ba0929834be19a424195649fa9b7ceccec41c0ebbc732e2f"} Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.981069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" event={"ID":"7bd0c0a5-5844-4906-bafc-1806ca7901a7","Type":"ContainerStarted","Data":"3067e9538fcc8826f1afcefe2a664d16c4e48f550633a44381e1a7eb89d35cb7"} Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.986953 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" event={"ID":"63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee","Type":"ContainerStarted","Data":"a7781a401c37924f9591b5903062b904374bfe816c7e3f9cc77886a6bfc89e7a"} Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.989030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" podUID="63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee" Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.993892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" event={"ID":"fe04b110-3ba2-468b-ae82-ae43720f03ad","Type":"ContainerStarted","Data":"23d6e3fc4b9810f681767d1b07af85e2c2347c3f297e068cbde5717d7c278d60"} Feb 16 21:55:06 crc kubenswrapper[4792]: I0216 21:55:06.997626 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.997821 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:06 crc kubenswrapper[4792]: E0216 21:55:06.997880 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert podName:0ca1643f-fcdd-4500-b446-06862c80c736 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:10.997864613 +0000 UTC m=+1043.651143504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert") pod "infra-operator-controller-manager-79d975b745-d52s2" (UID: "0ca1643f-fcdd-4500-b446-06862c80c736") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.012545 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" event={"ID":"8d8bb033-cde2-41c5-9ac9-ea761df10203","Type":"ContainerStarted","Data":"b4d3b2709345e64d686a9bc7c1645673c7d420dd9ea1660989b8f799b2faca88"} Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.015063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" event={"ID":"bd719b4e-7fbb-48d2-ab0f-3a0257fe4070","Type":"ContainerStarted","Data":"c0ccbae28b9ee72b2451d2c5256bdfaec6dde5aa5d4f0dae514203aba0f701c0"} Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.018855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" event={"ID":"be6b1607-d6a3-4970-80c3-e1368db4877e","Type":"ContainerStarted","Data":"58764dbbc4255c48ff24f67a3ab704ce00fa6847ddd139fe1fca9cd7ac2e394f"} Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.022158 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" event={"ID":"545d4d3f-7ef6-413d-a879-59591fbb7f16","Type":"ContainerStarted","Data":"ef0a4895a2265433441b973c8b582e5d3a74841a188b5c6861c6283031b30e6d"} Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.025963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" event={"ID":"1afa399d-c3b2-4ad7-a61d-b139e3a975ae","Type":"ContainerStarted","Data":"094e3cba5a4d99278ae619afdc021e2b6647b2d5527b168b1831497ded354ffb"} Feb 16 21:55:07 crc kubenswrapper[4792]: I0216 21:55:07.814359 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:07 crc kubenswrapper[4792]: E0216 21:55:07.814500 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:07 crc kubenswrapper[4792]: E0216 21:55:07.814570 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:11.814552662 +0000 UTC m=+1044.467831543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:08 crc kubenswrapper[4792]: E0216 21:55:08.065695 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" podUID="63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee" Feb 16 21:55:08 crc kubenswrapper[4792]: I0216 21:55:08.224287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:08 crc kubenswrapper[4792]: I0216 21:55:08.224637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:08 crc kubenswrapper[4792]: E0216 21:55:08.224769 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:08 crc kubenswrapper[4792]: E0216 21:55:08.224857 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:12.224840608 +0000 UTC m=+1044.878119499 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:08 crc kubenswrapper[4792]: E0216 21:55:08.225262 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:08 crc kubenswrapper[4792]: E0216 21:55:08.225389 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:12.225344431 +0000 UTC m=+1044.878623322 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:11 crc kubenswrapper[4792]: I0216 21:55:11.080707 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:11 crc kubenswrapper[4792]: E0216 21:55:11.080877 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:11 crc kubenswrapper[4792]: E0216 21:55:11.081765 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert podName:0ca1643f-fcdd-4500-b446-06862c80c736 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:19.081717833 +0000 UTC m=+1051.734996724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert") pod "infra-operator-controller-manager-79d975b745-d52s2" (UID: "0ca1643f-fcdd-4500-b446-06862c80c736") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:55:11 crc kubenswrapper[4792]: I0216 21:55:11.894444 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:11 crc kubenswrapper[4792]: E0216 21:55:11.894654 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:11 crc kubenswrapper[4792]: E0216 21:55:11.894740 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:19.894716782 +0000 UTC m=+1052.547995673 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:12 crc kubenswrapper[4792]: I0216 21:55:12.301404 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:12 crc kubenswrapper[4792]: I0216 21:55:12.301486 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:12 crc kubenswrapper[4792]: E0216 21:55:12.301624 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:12 crc kubenswrapper[4792]: E0216 21:55:12.301702 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:20.301681898 +0000 UTC m=+1052.954960799 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:12 crc kubenswrapper[4792]: E0216 21:55:12.301796 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:12 crc kubenswrapper[4792]: E0216 21:55:12.301857 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:20.301838923 +0000 UTC m=+1052.955117914 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:17 crc kubenswrapper[4792]: E0216 21:55:17.824302 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 21:55:17 crc kubenswrapper[4792]: E0216 21:55:17.825215 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l24ss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-c7g29_openstack-operators(2c61991d-c4f0-4ac4-81af-951bbb318042): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:17 crc kubenswrapper[4792]: E0216 21:55:17.827018 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" 
podUID="2c61991d-c4f0-4ac4-81af-951bbb318042" Feb 16 21:55:18 crc kubenswrapper[4792]: E0216 21:55:18.148180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" podUID="2c61991d-c4f0-4ac4-81af-951bbb318042" Feb 16 21:55:18 crc kubenswrapper[4792]: E0216 21:55:18.720586 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 21:55:18 crc kubenswrapper[4792]: E0216 21:55:18.720861 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmsj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-xklb9_openstack-operators(8d8bb033-cde2-41c5-9ac9-ea761df10203): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:18 crc kubenswrapper[4792]: E0216 21:55:18.722683 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" podUID="8d8bb033-cde2-41c5-9ac9-ea761df10203" Feb 16 21:55:19 crc kubenswrapper[4792]: I0216 21:55:19.128634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:19 crc kubenswrapper[4792]: I0216 21:55:19.148249 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ca1643f-fcdd-4500-b446-06862c80c736-cert\") pod \"infra-operator-controller-manager-79d975b745-d52s2\" (UID: \"0ca1643f-fcdd-4500-b446-06862c80c736\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.160573 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" podUID="8d8bb033-cde2-41c5-9ac9-ea761df10203" Feb 16 21:55:19 crc kubenswrapper[4792]: I0216 21:55:19.251945 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.331344 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.331518 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zml7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-qc68s_openstack-operators(7b4f7a7e-b90d-4210-8254-ae10083bf021): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.333008 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" podUID="7b4f7a7e-b90d-4210-8254-ae10083bf021" Feb 16 21:55:19 crc kubenswrapper[4792]: I0216 21:55:19.942395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.942583 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:19 crc kubenswrapper[4792]: E0216 21:55:19.942673 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert podName:6d7fec09-c983-4893-b691-10fec0ee2206 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:35.942653868 +0000 UTC m=+1068.595932759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" (UID: "6d7fec09-c983-4893-b691-10fec0ee2206") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.166239 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" podUID="7b4f7a7e-b90d-4210-8254-ae10083bf021" Feb 16 21:55:20 crc kubenswrapper[4792]: I0216 21:55:20.348288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:20 crc kubenswrapper[4792]: I0216 21:55:20.348369 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.348417 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.348485 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.348486 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:36.348467104 +0000 UTC m=+1069.001745995 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "webhook-server-cert" not found Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.348530 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs podName:4b00b428-3d0e-4120-a21c-7722e529fde5 nodeName:}" failed. No retries permitted until 2026-02-16 21:55:36.348520505 +0000 UTC m=+1069.001799396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs") pod "openstack-operator-controller-manager-9c8f544df-6dgqv" (UID: "4b00b428-3d0e-4120-a21c-7722e529fde5") : secret "metrics-server-cert" not found Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.724281 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.724848 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6gzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-8qm72_openstack-operators(7bd0c0a5-5844-4906-bafc-1806ca7901a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:20 crc kubenswrapper[4792]: E0216 21:55:20.726084 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" 
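
[analysis note] The MountVolume.SetUp failures above all trace back to Secret objects that do not yet exist in the openstack-operators namespace (openstack-baremetal-operator-webhook-server-cert, webhook-server-cert, metrics-server-cert); the kubelet parks each mount operation and retries on a back-off (durationBeforeRetry 16s here). A minimal Go sketch, assuming client-go and a reachable kubeconfig (this tool is illustrative, not part of the log), that polls for those same secrets while the kubelet waits:

    // checkcerts.go: poll for the cert secrets the kubelet mounts are
    // blocked on above. Secret names and namespace are copied verbatim
    // from the log entries; everything else is an assumption.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        names := []string{
            "openstack-baremetal-operator-webhook-server-cert",
            "webhook-server-cert",
            "metrics-server-cert",
        }
        for {
            missing := 0
            for _, name := range names {
                // A NotFound error here matches the kubelet's
                // `Couldn't get secret ...: secret ... not found` lines.
                if _, err := client.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
                    fmt.Printf("secret %q not ready: %v\n", name, err)
                    missing++
                }
            }
            if missing == 0 {
                fmt.Println("all cert secrets present; pending mounts should succeed on retry")
                return
            }
            time.Sleep(5 * time.Second)
        }
    }

Once the secrets appear (typically created by cert-manager or the operator bundle), the parked operations succeed on their next retry, which is exactly what the later "MountVolume.SetUp succeeded" entries in this log show.
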
podUID="7bd0c0a5-5844-4906-bafc-1806ca7901a7" Feb 16 21:55:21 crc kubenswrapper[4792]: E0216 21:55:21.175879 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" podUID="7bd0c0a5-5844-4906-bafc-1806ca7901a7" Feb 16 21:55:21 crc kubenswrapper[4792]: E0216 21:55:21.267319 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 16 21:55:21 crc kubenswrapper[4792]: E0216 21:55:21.267466 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqvlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-ld8dz_openstack-operators(545d4d3f-7ef6-413d-a879-59591fbb7f16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:21 crc kubenswrapper[4792]: E0216 21:55:21.268671 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" podUID="545d4d3f-7ef6-413d-a879-59591fbb7f16" Feb 16 21:55:22 crc kubenswrapper[4792]: E0216 21:55:22.184526 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" podUID="545d4d3f-7ef6-413d-a879-59591fbb7f16" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.170319 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.170533 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8b4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-xl8k2_openstack-operators(bd4eda7b-78cc-4c87-9210-6c9581ad3fab): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.171965 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" podUID="bd4eda7b-78cc-4c87-9210-6c9581ad3fab" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.194395 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" podUID="bd4eda7b-78cc-4c87-9210-6c9581ad3fab" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.747896 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.748338 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44wlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-bzg6v_openstack-operators(bd719b4e-7fbb-48d2-ab0f-3a0257fe4070): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:23 crc kubenswrapper[4792]: E0216 21:55:23.750205 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" podUID="bd719b4e-7fbb-48d2-ab0f-3a0257fe4070" Feb 16 21:55:24 crc kubenswrapper[4792]: E0216 21:55:24.207126 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" podUID="bd719b4e-7fbb-48d2-ab0f-3a0257fe4070" Feb 16 21:55:24 crc kubenswrapper[4792]: E0216 21:55:24.344555 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 16 21:55:24 crc kubenswrapper[4792]: E0216 21:55:24.344736 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lqmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-6nlgl_openstack-operators(be6b1607-d6a3-4970-80c3-e1368db4877e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:24 crc kubenswrapper[4792]: E0216 21:55:24.345993 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" podUID="be6b1607-d6a3-4970-80c3-e1368db4877e" Feb 16 21:55:25 crc kubenswrapper[4792]: E0216 21:55:25.212829 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" podUID="be6b1607-d6a3-4970-80c3-e1368db4877e" Feb 16 21:55:26 crc kubenswrapper[4792]: E0216 21:55:26.062568 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 21:55:26 crc kubenswrapper[4792]: E0216 21:55:26.063070 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6lkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-bxt7g_openstack-operators(1afa399d-c3b2-4ad7-a61d-b139e3a975ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:26 crc kubenswrapper[4792]: E0216 21:55:26.064205 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" podUID="1afa399d-c3b2-4ad7-a61d-b139e3a975ae" Feb 16 21:55:26 crc kubenswrapper[4792]: E0216 21:55:26.219970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" podUID="1afa399d-c3b2-4ad7-a61d-b139e3a975ae" Feb 16 21:55:27 crc kubenswrapper[4792]: E0216 21:55:27.219821 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 21:55:27 crc kubenswrapper[4792]: E0216 21:55:27.219988 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp6l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-gsjf4_openstack-operators(16470449-37c4-419d-8932-f0c7ee201aaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:27 crc kubenswrapper[4792]: E0216 21:55:27.221093 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" podUID="16470449-37c4-419d-8932-f0c7ee201aaa" Feb 16 21:55:28 crc kubenswrapper[4792]: E0216 21:55:28.237057 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" podUID="16470449-37c4-419d-8932-f0c7ee201aaa" Feb 16 21:55:28 crc kubenswrapper[4792]: E0216 21:55:28.822551 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 21:55:28 crc kubenswrapper[4792]: E0216 21:55:28.822624 4792 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 21:55:28 crc kubenswrapper[4792]: E0216 21:55:28.822778 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxzts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-79996fd568-rkdpn_openstack-operators(fe04b110-3ba2-468b-ae82-ae43720f03ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:28 crc kubenswrapper[4792]: E0216 21:55:28.824041 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" podUID="fe04b110-3ba2-468b-ae82-ae43720f03ad" Feb 16 21:55:29 crc kubenswrapper[4792]: E0216 21:55:29.242992 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" 
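
[analysis note] Every ErrImagePull in this stretch carries the same gRPC status, Canceled / "copying config: context canceled": the CRI-O pull was aborted mid-copy rather than rejected by the registry, and each failure then feeds the per-image back-off that produces the ImagePullBackOff lines that follow. Note also that telemetry-operator is pulled by tag from a local registry (38.102.83.102:5001) while the other operators are pulled by digest from quay.io. A toy sketch of the doubling back-off pattern; the 10s initial delay and 5m cap are the upstream kubelet defaults, assumed here rather than taken from this log:

    // backoff.go: illustrates the doubling image-pull back-off visible in
    // the ImagePullBackOff lines above. Not kubelet source; parameters
    // are the documented upstream defaults.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        wait, maxWait := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed: next retry in %s\n", attempt, wait)
            wait *= 2 // back-off doubles per failure...
            if wait > maxWait {
                wait = maxWait // ...up to the cap
            }
        }
    }
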
podUID="fe04b110-3ba2-468b-ae82-ae43720f03ad" Feb 16 21:55:30 crc kubenswrapper[4792]: E0216 21:55:30.533771 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 21:55:30 crc kubenswrapper[4792]: E0216 21:55:30.534056 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wznds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-8fcb2_openstack-operators(47b9a9f7-c72f-45ae-96ea-1e8b19065304): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:30 crc kubenswrapper[4792]: E0216 21:55:30.535357 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" podUID="47b9a9f7-c72f-45ae-96ea-1e8b19065304" Feb 16 21:55:31 crc kubenswrapper[4792]: E0216 21:55:31.177732 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 21:55:31 crc kubenswrapper[4792]: E0216 21:55:31.178188 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkfjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-n9g6q_openstack-operators(8b18ef30-f020-4cf7-8068-69f90696ac66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:55:31 crc kubenswrapper[4792]: E0216 21:55:31.179435 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" podUID="8b18ef30-f020-4cf7-8068-69f90696ac66" Feb 16 21:55:31 crc kubenswrapper[4792]: E0216 21:55:31.266794 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" 
podUID="8b18ef30-f020-4cf7-8068-69f90696ac66" Feb 16 21:55:31 crc kubenswrapper[4792]: E0216 21:55:31.266836 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" podUID="47b9a9f7-c72f-45ae-96ea-1e8b19065304" Feb 16 21:55:31 crc kubenswrapper[4792]: I0216 21:55:31.589440 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-d52s2"] Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.273504 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" event={"ID":"e79f0a7a-0416-4cbe-b6ec-c52db85aae80","Type":"ContainerStarted","Data":"d22ddabc418a85efb55dd04e3d123700e39dff8a4e7615fd16a1fbe401558ab2"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.274012 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.274726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" event={"ID":"0ca1643f-fcdd-4500-b446-06862c80c736","Type":"ContainerStarted","Data":"3aeeedd1d3baf0caa43f4b01f0154d80d57ebff2bbfecfa401525baa412a0ccc"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.276765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" event={"ID":"f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f","Type":"ContainerStarted","Data":"493fdb018fd79ee1ca87f64be7a40f52da78bf4180958d43267c8f2777408bcf"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.276893 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.279286 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" event={"ID":"3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5","Type":"ContainerStarted","Data":"03e560a9637b25ba23acc4667be1f2fd4b3b360cb3a55f35a01880ba628efa87"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.279402 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.281259 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" event={"ID":"14a0a678-34ee-46ea-97b2-dda55282c312","Type":"ContainerStarted","Data":"6b7bf3c0d2d7b05d5f510a2fd1d3915ebccea5b9eae637fd5b596334f4244133"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.281637 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.283342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" 
event={"ID":"3552825c-be0d-4a97-9caf-f8a1ceb96564","Type":"ContainerStarted","Data":"2c4907fab9369f93d609e6ca186d8b1289054966250e09416a9289e521db35cf"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.283485 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.286836 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" event={"ID":"63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee","Type":"ContainerStarted","Data":"2a45549ab59c90e5b78f1b660b3145851baa3678a80db9b1950f139fe7fd40aa"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.288915 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" event={"ID":"0031ef47-8c9b-43e3-8484-f1400d13b1c0","Type":"ContainerStarted","Data":"54f5067600944a1818ca68ff23c34fd0e9c4c01fd64c973c9b6796889e95c099"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.289020 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.290652 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" event={"ID":"8d8bb033-cde2-41c5-9ac9-ea761df10203","Type":"ContainerStarted","Data":"5b0834a9a363c5fe1748fd722a652fe651a32dadb6907e24ebd14417be6a6440"} Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.290813 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.294961 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" podStartSLOduration=4.294442097 podStartE2EDuration="30.294946548s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:04.507128012 +0000 UTC m=+1037.160406903" lastFinishedPulling="2026-02-16 21:55:30.507632463 +0000 UTC m=+1063.160911354" observedRunningTime="2026-02-16 21:55:32.289308737 +0000 UTC m=+1064.942587628" watchObservedRunningTime="2026-02-16 21:55:32.294946548 +0000 UTC m=+1064.948225439" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.309854 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" podStartSLOduration=4.807481182 podStartE2EDuration="30.309837458s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.006289314 +0000 UTC m=+1037.659568205" lastFinishedPulling="2026-02-16 21:55:30.5086456 +0000 UTC m=+1063.161924481" observedRunningTime="2026-02-16 21:55:32.306123338 +0000 UTC m=+1064.959402229" watchObservedRunningTime="2026-02-16 21:55:32.309837458 +0000 UTC m=+1064.963116349" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.332872 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" podStartSLOduration=4.814224297 podStartE2EDuration="30.332854655s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:04.990139305 +0000 UTC 
m=+1037.643418196" lastFinishedPulling="2026-02-16 21:55:30.508769623 +0000 UTC m=+1063.162048554" observedRunningTime="2026-02-16 21:55:32.331988831 +0000 UTC m=+1064.985267722" watchObservedRunningTime="2026-02-16 21:55:32.332854655 +0000 UTC m=+1064.986133546" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.364030 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" podStartSLOduration=4.384777873 podStartE2EDuration="30.36400963s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:04.528443717 +0000 UTC m=+1037.181722608" lastFinishedPulling="2026-02-16 21:55:30.507675474 +0000 UTC m=+1063.160954365" observedRunningTime="2026-02-16 21:55:32.350832967 +0000 UTC m=+1065.004111878" watchObservedRunningTime="2026-02-16 21:55:32.36400963 +0000 UTC m=+1065.017288521" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.389458 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6qzwl" podStartSLOduration=3.8732894780000002 podStartE2EDuration="29.389431402s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:06.303110243 +0000 UTC m=+1038.956389134" lastFinishedPulling="2026-02-16 21:55:31.819252167 +0000 UTC m=+1064.472531058" observedRunningTime="2026-02-16 21:55:32.381947932 +0000 UTC m=+1065.035226823" watchObservedRunningTime="2026-02-16 21:55:32.389431402 +0000 UTC m=+1065.042710293" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.404856 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" podStartSLOduration=4.762916396 podStartE2EDuration="30.404841885s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:04.321175987 +0000 UTC m=+1036.974454878" lastFinishedPulling="2026-02-16 21:55:29.963101476 +0000 UTC m=+1062.616380367" observedRunningTime="2026-02-16 21:55:32.402621056 +0000 UTC m=+1065.055899957" watchObservedRunningTime="2026-02-16 21:55:32.404841885 +0000 UTC m=+1065.058120766" Feb 16 21:55:32 crc kubenswrapper[4792]: I0216 21:55:32.419542 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" podStartSLOduration=4.900156773 podStartE2EDuration="30.41952452s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:04.988265956 +0000 UTC m=+1037.641544847" lastFinishedPulling="2026-02-16 21:55:30.507633703 +0000 UTC m=+1063.160912594" observedRunningTime="2026-02-16 21:55:32.41841603 +0000 UTC m=+1065.071694931" watchObservedRunningTime="2026-02-16 21:55:32.41952452 +0000 UTC m=+1065.072803411" Feb 16 21:55:33 crc kubenswrapper[4792]: I0216 21:55:33.096345 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" podStartSLOduration=4.173026438 podStartE2EDuration="30.096319274s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.892002205 +0000 UTC m=+1038.545281096" lastFinishedPulling="2026-02-16 21:55:31.815295031 +0000 UTC m=+1064.468573932" observedRunningTime="2026-02-16 21:55:32.464579378 +0000 UTC m=+1065.117858269" watchObservedRunningTime="2026-02-16 21:55:33.096319274 +0000 UTC 
m=+1065.749598175" Feb 16 21:55:33 crc kubenswrapper[4792]: I0216 21:55:33.299346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" event={"ID":"7b4f7a7e-b90d-4210-8254-ae10083bf021","Type":"ContainerStarted","Data":"c47ce4f4cb487c321b00eef6e918fa463c29f234c1859169aa1d8a1aea890c8e"} Feb 16 21:55:33 crc kubenswrapper[4792]: I0216 21:55:33.301971 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:33 crc kubenswrapper[4792]: I0216 21:55:33.318373 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" podStartSLOduration=4.14970416 podStartE2EDuration="30.318355221s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:06.25613983 +0000 UTC m=+1038.909418721" lastFinishedPulling="2026-02-16 21:55:32.424790891 +0000 UTC m=+1065.078069782" observedRunningTime="2026-02-16 21:55:33.3138283 +0000 UTC m=+1065.967107191" watchObservedRunningTime="2026-02-16 21:55:33.318355221 +0000 UTC m=+1065.971634112" Feb 16 21:55:35 crc kubenswrapper[4792]: I0216 21:55:35.316246 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" event={"ID":"2c61991d-c4f0-4ac4-81af-951bbb318042","Type":"ContainerStarted","Data":"e0ab31d4b807f897ba97e240d96e62a1e58ab16918dcb02c164964f43121b0a9"} Feb 16 21:55:35 crc kubenswrapper[4792]: I0216 21:55:35.317522 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:35 crc kubenswrapper[4792]: I0216 21:55:35.345812 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" podStartSLOduration=5.84678125 podStartE2EDuration="33.345791247s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.00654262 +0000 UTC m=+1037.659821511" lastFinishedPulling="2026-02-16 21:55:32.505552617 +0000 UTC m=+1065.158831508" observedRunningTime="2026-02-16 21:55:35.343873985 +0000 UTC m=+1067.997152886" watchObservedRunningTime="2026-02-16 21:55:35.345791247 +0000 UTC m=+1067.999070138" Feb 16 21:55:35 crc kubenswrapper[4792]: I0216 21:55:35.947032 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:35 crc kubenswrapper[4792]: I0216 21:55:35.953776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d7fec09-c983-4893-b691-10fec0ee2206-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw\" (UID: \"6d7fec09-c983-4893-b691-10fec0ee2206\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.188633 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.325249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" event={"ID":"0ca1643f-fcdd-4500-b446-06862c80c736","Type":"ContainerStarted","Data":"03de1a24945843e3d838ca8c72bbbd4399a43ff4e3359260f9b4f7e9ad917fd7"} Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.325393 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.328388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" event={"ID":"7bd0c0a5-5844-4906-bafc-1806ca7901a7","Type":"ContainerStarted","Data":"5f2c44aa3e4ed43d89588370680fc35a2a1b7d3373c29dad55284be73b4a344d"} Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.328648 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.351920 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" podStartSLOduration=30.135067608 podStartE2EDuration="34.351901795s" podCreationTimestamp="2026-02-16 21:55:02 +0000 UTC" firstStartedPulling="2026-02-16 21:55:31.726859559 +0000 UTC m=+1064.380138450" lastFinishedPulling="2026-02-16 21:55:35.943693756 +0000 UTC m=+1068.596972637" observedRunningTime="2026-02-16 21:55:36.346374987 +0000 UTC m=+1068.999653878" watchObservedRunningTime="2026-02-16 21:55:36.351901795 +0000 UTC m=+1069.005180686" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.356275 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.356337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.361565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-metrics-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: \"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.361572 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b00b428-3d0e-4120-a21c-7722e529fde5-webhook-certs\") pod \"openstack-operator-controller-manager-9c8f544df-6dgqv\" (UID: 
\"4b00b428-3d0e-4120-a21c-7722e529fde5\") " pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.376892 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" podStartSLOduration=4.090747925 podStartE2EDuration="33.376876635s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.93338536 +0000 UTC m=+1038.586664251" lastFinishedPulling="2026-02-16 21:55:35.21951407 +0000 UTC m=+1067.872792961" observedRunningTime="2026-02-16 21:55:36.373960507 +0000 UTC m=+1069.027239388" watchObservedRunningTime="2026-02-16 21:55:36.376876635 +0000 UTC m=+1069.030155526" Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.517999 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:36 crc kubenswrapper[4792]: W0216 21:55:36.848272 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d7fec09_c983_4893_b691_10fec0ee2206.slice/crio-4862ae6be24d302f1db3b8191674144353dcd89bac9e01f1c8548ff693df35ec WatchSource:0}: Error finding container 4862ae6be24d302f1db3b8191674144353dcd89bac9e01f1c8548ff693df35ec: Status 404 returned error can't find the container with id 4862ae6be24d302f1db3b8191674144353dcd89bac9e01f1c8548ff693df35ec Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.859643 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw"] Feb 16 21:55:36 crc kubenswrapper[4792]: I0216 21:55:36.928716 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv"] Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.337317 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" event={"ID":"4b00b428-3d0e-4120-a21c-7722e529fde5","Type":"ContainerStarted","Data":"7fe2d2ea733fb27f63152fb685458f620bd02ef9379cd48763847990203fcc35"} Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.337367 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" event={"ID":"4b00b428-3d0e-4120-a21c-7722e529fde5","Type":"ContainerStarted","Data":"da00b05498a5edb0ca972cae8edc14a34e852b8d7ddf956d2a0d1589251a3205"} Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.337386 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.338256 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" event={"ID":"6d7fec09-c983-4893-b691-10fec0ee2206","Type":"ContainerStarted","Data":"4862ae6be24d302f1db3b8191674144353dcd89bac9e01f1c8548ff693df35ec"} Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.339557 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" 
event={"ID":"545d4d3f-7ef6-413d-a879-59591fbb7f16","Type":"ContainerStarted","Data":"3a7ab78f7a173345515971db3fca5573f16a023f655ea6a203a9d86429e81e43"} Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.339925 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.371906 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" podStartSLOduration=34.371885797 podStartE2EDuration="34.371885797s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:55:37.364469058 +0000 UTC m=+1070.017747949" watchObservedRunningTime="2026-02-16 21:55:37.371885797 +0000 UTC m=+1070.025164688" Feb 16 21:55:37 crc kubenswrapper[4792]: I0216 21:55:37.386211 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" podStartSLOduration=3.562824364 podStartE2EDuration="34.38619457s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.935513267 +0000 UTC m=+1038.588792158" lastFinishedPulling="2026-02-16 21:55:36.758883473 +0000 UTC m=+1069.412162364" observedRunningTime="2026-02-16 21:55:37.382635945 +0000 UTC m=+1070.035914836" watchObservedRunningTime="2026-02-16 21:55:37.38619457 +0000 UTC m=+1070.039473461" Feb 16 21:55:38 crc kubenswrapper[4792]: I0216 21:55:38.357944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" event={"ID":"be6b1607-d6a3-4970-80c3-e1368db4877e","Type":"ContainerStarted","Data":"061e0e2b93909fe26fc34fe64a9e05bf8a1323e7183c486095b297d3b60682f9"} Feb 16 21:55:38 crc kubenswrapper[4792]: I0216 21:55:38.381773 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" podStartSLOduration=4.12294234 podStartE2EDuration="35.381735776s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:06.257177356 +0000 UTC m=+1038.910456247" lastFinishedPulling="2026-02-16 21:55:37.515970782 +0000 UTC m=+1070.169249683" observedRunningTime="2026-02-16 21:55:38.373952048 +0000 UTC m=+1071.027230939" watchObservedRunningTime="2026-02-16 21:55:38.381735776 +0000 UTC m=+1071.035014667" Feb 16 21:55:39 crc kubenswrapper[4792]: I0216 21:55:39.375382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" event={"ID":"6d7fec09-c983-4893-b691-10fec0ee2206","Type":"ContainerStarted","Data":"bad6cd5bb5b7001c1ffbb6674adafa4b8fedc199097435e5fd9e986f40358278"} Feb 16 21:55:39 crc kubenswrapper[4792]: I0216 21:55:39.376392 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:39 crc kubenswrapper[4792]: I0216 21:55:39.378408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" 
event={"ID":"bd4eda7b-78cc-4c87-9210-6c9581ad3fab","Type":"ContainerStarted","Data":"5f011e70562c07595070fa62f16e5b5c7fd33ffa8bd6924cb1ef8a5324df5302"} Feb 16 21:55:39 crc kubenswrapper[4792]: I0216 21:55:39.378552 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:39 crc kubenswrapper[4792]: I0216 21:55:39.445159 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" podStartSLOduration=34.407676987 podStartE2EDuration="36.445141662s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:36.856406949 +0000 UTC m=+1069.509685840" lastFinishedPulling="2026-02-16 21:55:38.893871624 +0000 UTC m=+1071.547150515" observedRunningTime="2026-02-16 21:55:39.406659099 +0000 UTC m=+1072.059937990" watchObservedRunningTime="2026-02-16 21:55:39.445141662 +0000 UTC m=+1072.098420553" Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.386364 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" event={"ID":"1afa399d-c3b2-4ad7-a61d-b139e3a975ae","Type":"ContainerStarted","Data":"a6d758cec0acb871c15224591e27ca2a7e202cc515c73b85017d49b1e9027b67"} Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.386901 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.388747 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" event={"ID":"bd719b4e-7fbb-48d2-ab0f-3a0257fe4070","Type":"ContainerStarted","Data":"82e92069c7dbb9039d7279fa3a346fb278b8991880bbfff3f4b0fea4cc679fda"} Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.389068 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.404689 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" podStartSLOduration=3.880535117 podStartE2EDuration="37.404666491s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.933521524 +0000 UTC m=+1038.586800415" lastFinishedPulling="2026-02-16 21:55:39.457652898 +0000 UTC m=+1072.110931789" observedRunningTime="2026-02-16 21:55:40.399893333 +0000 UTC m=+1073.053172234" watchObservedRunningTime="2026-02-16 21:55:40.404666491 +0000 UTC m=+1073.057945382" Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.405349 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" podStartSLOduration=3.978594976 podStartE2EDuration="37.405343139s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.46373106 +0000 UTC m=+1038.117009941" lastFinishedPulling="2026-02-16 21:55:38.890479213 +0000 UTC m=+1071.543758104" observedRunningTime="2026-02-16 21:55:39.448191654 +0000 UTC m=+1072.101470545" watchObservedRunningTime="2026-02-16 21:55:40.405343139 +0000 UTC m=+1073.058622030" Feb 16 21:55:40 crc kubenswrapper[4792]: I0216 21:55:40.421220 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" podStartSLOduration=3.900710738 podStartE2EDuration="37.421201505s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.937029007 +0000 UTC m=+1038.590307898" lastFinishedPulling="2026-02-16 21:55:39.457519774 +0000 UTC m=+1072.110798665" observedRunningTime="2026-02-16 21:55:40.41691951 +0000 UTC m=+1073.070198411" watchObservedRunningTime="2026-02-16 21:55:40.421201505 +0000 UTC m=+1073.074480396" Feb 16 21:55:42 crc kubenswrapper[4792]: I0216 21:55:42.404802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" event={"ID":"fe04b110-3ba2-468b-ae82-ae43720f03ad","Type":"ContainerStarted","Data":"ef77a31736312cfc13aeffe7b52008b1f5b9bf80ca7ae4a5e604df9b1df3a509"} Feb 16 21:55:42 crc kubenswrapper[4792]: I0216 21:55:42.405319 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:55:42 crc kubenswrapper[4792]: I0216 21:55:42.420761 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" podStartSLOduration=3.247457251 podStartE2EDuration="39.420742662s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.946470627 +0000 UTC m=+1038.599749508" lastFinishedPulling="2026-02-16 21:55:42.119756028 +0000 UTC m=+1074.773034919" observedRunningTime="2026-02-16 21:55:42.41805174 +0000 UTC m=+1075.071330631" watchObservedRunningTime="2026-02-16 21:55:42.420742662 +0000 UTC m=+1075.074021553" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.249717 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ckk8x" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.265788 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-q68hm" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.311422 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-68zdd" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.353726 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bdq8l" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.373746 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kwchw" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.412452 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" event={"ID":"47b9a9f7-c72f-45ae-96ea-1e8b19065304","Type":"ContainerStarted","Data":"3c9d45b21cb84c98f0d625ce5bfda49bbc389dd8d44ccfb33b503b9731a1e25d"} Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.413426 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.414896 4792 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" event={"ID":"16470449-37c4-419d-8932-f0c7ee201aaa","Type":"ContainerStarted","Data":"5db22b59f6dc4e293b153b6cc01e7dfd4f12d1562d8ae79dd18550a6fcb21453"} Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.415260 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.431313 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c7g29" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.469046 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" podStartSLOduration=3.473674958 podStartE2EDuration="40.469029013s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.503729869 +0000 UTC m=+1038.157008760" lastFinishedPulling="2026-02-16 21:55:42.499083914 +0000 UTC m=+1075.152362815" observedRunningTime="2026-02-16 21:55:43.462923279 +0000 UTC m=+1076.116202170" watchObservedRunningTime="2026-02-16 21:55:43.469029013 +0000 UTC m=+1076.122307904" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.492572 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" podStartSLOduration=3.503401234 podStartE2EDuration="40.492552783s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.508314912 +0000 UTC m=+1038.161593803" lastFinishedPulling="2026-02-16 21:55:42.497466471 +0000 UTC m=+1075.150745352" observedRunningTime="2026-02-16 21:55:43.484002364 +0000 UTC m=+1076.137281255" watchObservedRunningTime="2026-02-16 21:55:43.492552783 +0000 UTC m=+1076.145831674" Feb 16 21:55:43 crc kubenswrapper[4792]: I0216 21:55:43.683512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5jfgv" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.271376 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xklb9" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.328132 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bxt7g" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.418230 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-ld8dz" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.453984 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-8qm72" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.525555 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-bzg6v" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.691883 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.694257 
4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-6nlgl" Feb 16 21:55:44 crc kubenswrapper[4792]: I0216 21:55:44.709963 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qc68s" Feb 16 21:55:46 crc kubenswrapper[4792]: I0216 21:55:46.194942 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw" Feb 16 21:55:46 crc kubenswrapper[4792]: I0216 21:55:46.525246 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9c8f544df-6dgqv" Feb 16 21:55:48 crc kubenswrapper[4792]: I0216 21:55:48.463691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" event={"ID":"8b18ef30-f020-4cf7-8068-69f90696ac66","Type":"ContainerStarted","Data":"41102c3659e40d075e8205ef8a6f5008389b9663cf08cf856785627fbc172091"} Feb 16 21:55:48 crc kubenswrapper[4792]: I0216 21:55:48.463922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:48 crc kubenswrapper[4792]: I0216 21:55:48.482253 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" podStartSLOduration=3.555606377 podStartE2EDuration="45.482236832s" podCreationTimestamp="2026-02-16 21:55:03 +0000 UTC" firstStartedPulling="2026-02-16 21:55:05.501788028 +0000 UTC m=+1038.155066919" lastFinishedPulling="2026-02-16 21:55:47.428418483 +0000 UTC m=+1080.081697374" observedRunningTime="2026-02-16 21:55:48.478351517 +0000 UTC m=+1081.131630408" watchObservedRunningTime="2026-02-16 21:55:48.482236832 +0000 UTC m=+1081.135515723" Feb 16 21:55:49 crc kubenswrapper[4792]: I0216 21:55:49.258828 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-d52s2" Feb 16 21:55:53 crc kubenswrapper[4792]: I0216 21:55:53.773660 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-n9g6q" Feb 16 21:55:53 crc kubenswrapper[4792]: I0216 21:55:53.991046 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gsjf4" Feb 16 21:55:53 crc kubenswrapper[4792]: I0216 21:55:53.992027 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-xl8k2" Feb 16 21:55:54 crc kubenswrapper[4792]: I0216 21:55:54.128779 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8fcb2" Feb 16 21:55:54 crc kubenswrapper[4792]: I0216 21:55:54.667668 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-79996fd568-rkdpn" Feb 16 21:56:01 crc kubenswrapper[4792]: I0216 21:56:01.531919 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:56:01 crc kubenswrapper[4792]: I0216 21:56:01.532392 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.062765 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.068926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.071016 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.071845 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5nkwn" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.071989 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.072104 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.092411 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.172569 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.174160 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.177563 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.197915 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.201439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.202089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zjp\" (UniqueName: \"kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.303352 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.303412 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dczw\" (UniqueName: \"kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.303437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.303517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zjp\" (UniqueName: \"kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.303549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.305375 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 
21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.326806 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zjp\" (UniqueName: \"kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp\") pod \"dnsmasq-dns-675f4bcbfc-ncgz2\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.388581 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.405259 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.405319 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dczw\" (UniqueName: \"kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.405416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.406204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.406417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.424724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dczw\" (UniqueName: \"kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw\") pod \"dnsmasq-dns-78dd6ddcc-8tbg8\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.492261 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:15 crc kubenswrapper[4792]: I0216 21:56:15.922149 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:16 crc kubenswrapper[4792]: I0216 21:56:16.037557 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:16 crc kubenswrapper[4792]: I0216 21:56:16.716709 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" event={"ID":"6074b703-7b92-4cb8-96ed-6a80dbdbce7d","Type":"ContainerStarted","Data":"22e2d05616a1feb969c10e93d0085227b22946f453981c4b82ab172b90d743d7"} Feb 16 21:56:16 crc kubenswrapper[4792]: I0216 21:56:16.719137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" event={"ID":"c08c9e6a-bd47-4daa-b9e7-0209b5811652","Type":"ContainerStarted","Data":"db379b175095406430c94796f331ae3d658fe050c4e7beede531673f111e7255"} Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.756535 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.784192 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.791158 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.818781 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.957978 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.958459 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgsh\" (UniqueName: \"kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:17 crc kubenswrapper[4792]: I0216 21:56:17.958491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.062330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.062470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgsh\" (UniqueName: \"kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh\") 
pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.062496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.063514 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.064554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.093638 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.099077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgsh\" (UniqueName: \"kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh\") pod \"dnsmasq-dns-666b6646f7-ngn6b\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.124830 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.128225 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.140279 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.174586 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.269461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.269533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt88h\" (UniqueName: \"kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.269581 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.371561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.371959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt88h\" (UniqueName: \"kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.372402 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.372990 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.373202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.410254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt88h\" (UniqueName: 
\"kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h\") pod \"dnsmasq-dns-57d769cc4f-pv9jk\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.497276 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:56:18 crc kubenswrapper[4792]: I0216 21:56:18.742965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:18 crc kubenswrapper[4792]: W0216 21:56:18.756912 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8af098de_cb86_4e2e_9871_9f43335daa16.slice/crio-bd2d4b528d45744de036205ea9ef36859a093755920347c965c51151acf875dd WatchSource:0}: Error finding container bd2d4b528d45744de036205ea9ef36859a093755920347c965c51151acf875dd: Status 404 returned error can't find the container with id bd2d4b528d45744de036205ea9ef36859a093755920347c965c51151acf875dd Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.917102 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.919965 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.932142 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.932315 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.932514 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.932683 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.932842 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gd5km" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.933007 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.949454 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.964320 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.967342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.983750 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.985724 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:18.994910 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.005001 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.035744 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098884 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098938 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.098982 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099021 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099051 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099081 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099148 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099184 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099264 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8ch\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099366 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5l7v\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099384 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099401 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099422 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099444 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099462 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9lj\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099509 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099555 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099618 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.099658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.133250 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 
21:56:19.201498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201650 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201683 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf\") pod \"rabbitmq-server-1\" (UID: 
\"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201735 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201793 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8ch\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201813 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201846 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5l7v\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201861 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201881 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9lj\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.201995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202009 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.202111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.203731 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.203979 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.204515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.205399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.205434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.205668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.207231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.207505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.208391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.208991 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.209234 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.211005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.211286 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.211376 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.212179 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.212388 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.212406 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.212459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.215495 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.215800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.216630 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.216668 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ada0102548212a6fc40a49ae1a277fc6184298bf2db5d525ba55233f2962106/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.220804 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.220933 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.220954 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20c7bc1850b81174e9caedc70a44c7496e9450066847b70ee49f2f7f9bc6c364/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.221168 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.221194 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4a7b9fb20bf9a324e2b8a4fd513a909868f60c1cc47520451461303a70b0b164/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.223304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.225348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.225732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.226357 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5l7v\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.227007 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.234047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.238679 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9lj\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.240882 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8ch\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 
21:56:19.245612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.257725 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.261492 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.263509 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.266182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.266502 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.267477 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.267902 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k5hbt" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.268150 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.268352 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.268581 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.269241 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.280027 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.303453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.309216 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.328331 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407369 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407483 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vnwv\" (UniqueName: 
\"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407505 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.407531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.509517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vnwv\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.509909 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.509954 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510062 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510084 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.510194 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.511360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.512300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.512492 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.512727 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.513298 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.515535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.519823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.525000 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.525044 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/408ec3f2e4754699964c8e323d7cd2d28ec9bc48e0167cd7e040036a16df5c2f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.527789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vnwv\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.530231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.530712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.568481 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.598679 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.631245 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.824484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" event={"ID":"c99c9de0-8ff3-480c-a57c-85cbc7cfb680","Type":"ContainerStarted","Data":"1ed6d92376a2419bb68c20a3c90a13727b4b73c9453f6e6151c0c3776ce58380"} Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:19.838749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" event={"ID":"8af098de-cb86-4e2e-9871-9f43335daa16","Type":"ContainerStarted","Data":"bd2d4b528d45744de036205ea9ef36859a093755920347c965c51151acf875dd"} Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.457327 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.459488 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.465282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.465500 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.465746 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-24whs" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.465940 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.471813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.472482 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.543508 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.543855 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.543907 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.543926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b4c\" (UniqueName: \"kubernetes.io/projected/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kube-api-access-q7b4c\") pod \"openstack-galera-0\" (UID: 
\"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.544002 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.544111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.544353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.544424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2951c85-d8c7-4273-8641-879c6168a05e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2951c85-d8c7-4273-8641-879c6168a05e\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646618 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646689 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646709 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b4c\" (UniqueName: \"kubernetes.io/projected/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kube-api-access-q7b4c\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " 
pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646835 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646907 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.646930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2951c85-d8c7-4273-8641-879c6168a05e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2951c85-d8c7-4273-8641-879c6168a05e\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.649246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.651724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.651874 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.656186 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.656228 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2951c85-d8c7-4273-8641-879c6168a05e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2951c85-d8c7-4273-8641-879c6168a05e\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/61df1d3873caa1f2b6bfdb4ce04618621621700aea3f4bc76592c3d0a95d63bf/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.656315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.661029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.671238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.682322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b4c\" (UniqueName: \"kubernetes.io/projected/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-kube-api-access-q7b4c\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.691290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce68e433-fd1b-4a65-84e2-33ecf84fc4ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.693496 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.708477 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.723268 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:56:20 crc kubenswrapper[4792]: W0216 21:56:20.763302 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod659cd2b3_5d9d_4992_acf8_385acdbbc443.slice/crio-25b6ea188d50072778dee7cf23785d88b26c3075ed7619470b2781e2036e6a7d WatchSource:0}: Error finding container 25b6ea188d50072778dee7cf23785d88b26c3075ed7619470b2781e2036e6a7d: Status 404 returned error can't find the container with id 25b6ea188d50072778dee7cf23785d88b26c3075ed7619470b2781e2036e6a7d Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.798138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2951c85-d8c7-4273-8641-879c6168a05e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2951c85-d8c7-4273-8641-879c6168a05e\") pod \"openstack-galera-0\" (UID: 
\"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea\") " pod="openstack/openstack-galera-0" Feb 16 21:56:20 crc kubenswrapper[4792]: W0216 21:56:20.814173 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b0b0738_c9c3_4b4f_86a2_8bb113270613.slice/crio-4e76442976f4fe438029d0ca9d3c5049b91d5c4914ba80f2128fd10dc25f281a WatchSource:0}: Error finding container 4e76442976f4fe438029d0ca9d3c5049b91d5c4914ba80f2128fd10dc25f281a: Status 404 returned error can't find the container with id 4e76442976f4fe438029d0ca9d3c5049b91d5c4914ba80f2128fd10dc25f281a Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.867585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerStarted","Data":"e7de349a9866bcba073cd393c7db26068162da598eeec123b1c269bea2d105b3"} Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.870279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerStarted","Data":"941615da69a5130fa41d1c9d9762bc68d30a50200b0878a1780b98300add0963"} Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.876996 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerStarted","Data":"4e76442976f4fe438029d0ca9d3c5049b91d5c4914ba80f2128fd10dc25f281a"} Feb 16 21:56:20 crc kubenswrapper[4792]: I0216 21:56:20.878772 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerStarted","Data":"25b6ea188d50072778dee7cf23785d88b26c3075ed7619470b2781e2036e6a7d"} Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.094626 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.610946 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.846371 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.851103 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.868307 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.868568 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.868746 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vmdv6" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.868948 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.875943 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.896908 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.897014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.897207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v96c\" (UniqueName: \"kubernetes.io/projected/07ce522d-6acb-4c52-aa4a-5997916ce345-kube-api-access-8v96c\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.897407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.897535 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.910589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.910646 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.910711 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.955294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea","Type":"ContainerStarted","Data":"a572f18c296ce30e0eb650cb1071c4536ccc168153eeda2f891754dc9e32c356"} Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.980357 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.981728 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.986268 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.995175 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.995365 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 21:56:21 crc kubenswrapper[4792]: I0216 21:56:21.995679 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mbs7d" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.011878 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-combined-ca-bundle\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.011931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012188 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-memcached-tls-certs\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: 
I0216 21:56:22.012264 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012280 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-kolla-config\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012379 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcff\" (UniqueName: \"kubernetes.io/projected/356c7c8e-30ec-45a3-a276-b8cca48b4774-kube-api-access-gwcff\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012445 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.012493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v96c\" (UniqueName: \"kubernetes.io/projected/07ce522d-6acb-4c52-aa4a-5997916ce345-kube-api-access-8v96c\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.013474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-config-data\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.013939 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.014983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.019976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07ce522d-6acb-4c52-aa4a-5997916ce345-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.032363 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.034897 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07b4d7050bc0c694e882604a5e4ea8403c0a8dcaafd654336e3053995902b9a8/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.036868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07ce522d-6acb-4c52-aa4a-5997916ce345-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.042354 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v96c\" (UniqueName: \"kubernetes.io/projected/07ce522d-6acb-4c52-aa4a-5997916ce345-kube-api-access-8v96c\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.060467 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.067493 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ce522d-6acb-4c52-aa4a-5997916ce345-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.079449 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6812773f-c53d-4ab9-9cc1-394ab3d6b53a\") pod \"openstack-cell1-galera-0\" (UID: \"07ce522d-6acb-4c52-aa4a-5997916ce345\") " pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.115795 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-kolla-config\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.115849 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwcff\" (UniqueName: \"kubernetes.io/projected/356c7c8e-30ec-45a3-a276-b8cca48b4774-kube-api-access-gwcff\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.116118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-config-data\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.116175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-combined-ca-bundle\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.116459 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-memcached-tls-certs\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.117539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-kolla-config\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.117555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/356c7c8e-30ec-45a3-a276-b8cca48b4774-config-data\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.120776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-memcached-tls-certs\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.121156 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c7c8e-30ec-45a3-a276-b8cca48b4774-combined-ca-bundle\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.147434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gwcff\" (UniqueName: \"kubernetes.io/projected/356c7c8e-30ec-45a3-a276-b8cca48b4774-kube-api-access-gwcff\") pod \"memcached-0\" (UID: \"356c7c8e-30ec-45a3-a276-b8cca48b4774\") " pod="openstack/memcached-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.196934 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 21:56:22 crc kubenswrapper[4792]: I0216 21:56:22.332194 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.044424 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.076848 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.391783 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.394859 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.402954 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5q29w" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.420479 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.498185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2xx\" (UniqueName: \"kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx\") pod \"kube-state-metrics-0\" (UID: \"97394c7a-06f3-433b-84dd-7ae885a8753d\") " pod="openstack/kube-state-metrics-0" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.600894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2xx\" (UniqueName: \"kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx\") pod \"kube-state-metrics-0\" (UID: \"97394c7a-06f3-433b-84dd-7ae885a8753d\") " pod="openstack/kube-state-metrics-0" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.646352 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2xx\" (UniqueName: \"kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx\") pod \"kube-state-metrics-0\" (UID: \"97394c7a-06f3-433b-84dd-7ae885a8753d\") " pod="openstack/kube-state-metrics-0" Feb 16 21:56:24 crc kubenswrapper[4792]: I0216 21:56:24.740854 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.289192 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.290537 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.294996 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-q9nh6" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.295216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.306042 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.431697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.432058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89mb\" (UniqueName: \"kubernetes.io/projected/6dea83c6-c1d5-4b8e-a70c-3184a366721a-kube-api-access-b89mb\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.538257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.580020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89mb\" (UniqueName: \"kubernetes.io/projected/6dea83c6-c1d5-4b8e-a70c-3184a366721a-kube-api-access-b89mb\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: E0216 21:56:25.564435 4792 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 16 21:56:25 crc kubenswrapper[4792]: E0216 21:56:25.580714 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert podName:6dea83c6-c1d5-4b8e-a70c-3184a366721a nodeName:}" failed. No retries permitted until 2026-02-16 21:56:26.080697248 +0000 UTC m=+1118.733976139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert") pod "observability-ui-dashboards-66cbf594b5-8nqmm" (UID: "6dea83c6-c1d5-4b8e-a70c-3184a366721a") : secret "observability-ui-dashboards" not found Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.631522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89mb\" (UniqueName: \"kubernetes.io/projected/6dea83c6-c1d5-4b8e-a70c-3184a366721a-kube-api-access-b89mb\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.641698 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.658431 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.664525 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.664720 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.664836 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.664905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbcwk" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.666481 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.666634 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.666769 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.686483 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.691277 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bc994d6fc-zlcp4"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699454 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: 
I0216 21:56:25.699644 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699880 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.699954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.700223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.700268 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvz7\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.700342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.700586 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.805778 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.813111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.813176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvz7\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.814608 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815223 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815504 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815549 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815582 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815690 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.815743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.816557 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.817461 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc994d6fc-zlcp4"] Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.818181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.846541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.846775 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.849972 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.850008 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0c450b835612ea0ffc6154278231fd6293d2b9aab214db327cd461039eaa73be/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.867115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.867953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.868652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvz7\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.870333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.920709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-oauth-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.920978 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.921022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-trusted-ca-bundle\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.921284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xt6nc\" (UniqueName: \"kubernetes.io/projected/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-kube-api-access-xt6nc\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.921544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-oauth-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.921587 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-service-ca\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.921648 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:25 crc kubenswrapper[4792]: I0216 21:56:25.941148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.014097 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-oauth-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025649 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-trusted-ca-bundle\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6nc\" (UniqueName: \"kubernetes.io/projected/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-kube-api-access-xt6nc\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025925 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-oauth-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.025959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-service-ca\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.027158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-service-ca\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.027272 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-trusted-ca-bundle\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " 
pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.027932 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.028366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-oauth-serving-cert\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.028527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.033155 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-console-oauth-config\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.065108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6nc\" (UniqueName: \"kubernetes.io/projected/f0aa11f8-95af-4212-80c4-c5f59a05ddc1-kube-api-access-xt6nc\") pod \"console-bc994d6fc-zlcp4\" (UID: \"f0aa11f8-95af-4212-80c4-c5f59a05ddc1\") " pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.138279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.143498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dea83c6-c1d5-4b8e-a70c-3184a366721a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8nqmm\" (UID: \"6dea83c6-c1d5-4b8e-a70c-3184a366721a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.233362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" Feb 16 21:56:26 crc kubenswrapper[4792]: I0216 21:56:26.326674 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc994d6fc-zlcp4" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.808494 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q4gs"] Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.810094 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.815550 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-72rl6" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.815558 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.815865 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.824121 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cfzsw"] Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.826428 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.846260 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs"] Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.861982 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cfzsw"] Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.977700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-ovn-controller-tls-certs\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.977790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60d2ecc7-d6a4-4c05-be72-ee4df484e081-scripts\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.977872 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-etc-ovs\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.977946 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc8ee070-8557-4708-a58f-7e5899ed206b-scripts\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkpj\" (UniqueName: \"kubernetes.io/projected/fc8ee070-8557-4708-a58f-7e5899ed206b-kube-api-access-4hkpj\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978069 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-combined-ca-bundle\") pod \"ovn-controller-5q4gs\" (UID: 
\"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-log\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978107 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-log-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxx9h\" (UniqueName: \"kubernetes.io/projected/60d2ecc7-d6a4-4c05-be72-ee4df484e081-kube-api-access-pxx9h\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978204 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-lib\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-run\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:27 crc kubenswrapper[4792]: I0216 21:56:27.978260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081018 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60d2ecc7-d6a4-4c05-be72-ee4df484e081-scripts\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081068 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-etc-ovs\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 
crc kubenswrapper[4792]: I0216 21:56:28.081105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc8ee070-8557-4708-a58f-7e5899ed206b-scripts\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081153 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkpj\" (UniqueName: \"kubernetes.io/projected/fc8ee070-8557-4708-a58f-7e5899ed206b-kube-api-access-4hkpj\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-combined-ca-bundle\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-log\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-log-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxx9h\" (UniqueName: \"kubernetes.io/projected/60d2ecc7-d6a4-4c05-be72-ee4df484e081-kube-api-access-pxx9h\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-lib\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-run\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.081352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-ovn-controller-tls-certs\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.083619 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60d2ecc7-d6a4-4c05-be72-ee4df484e081-scripts\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.083948 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-etc-ovs\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.085527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc8ee070-8557-4708-a58f-7e5899ed206b-scripts\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-log-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086420 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-log\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-run\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc8ee070-8557-4708-a58f-7e5899ed206b-var-run-ovn\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.086881 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/60d2ecc7-d6a4-4c05-be72-ee4df484e081-var-lib\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.093136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-combined-ca-bundle\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.093143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8ee070-8557-4708-a58f-7e5899ed206b-ovn-controller-tls-certs\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.112587 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxx9h\" (UniqueName: \"kubernetes.io/projected/60d2ecc7-d6a4-4c05-be72-ee4df484e081-kube-api-access-pxx9h\") pod \"ovn-controller-ovs-cfzsw\" (UID: \"60d2ecc7-d6a4-4c05-be72-ee4df484e081\") " pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.121383 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkpj\" (UniqueName: \"kubernetes.io/projected/fc8ee070-8557-4708-a58f-7e5899ed206b-kube-api-access-4hkpj\") pod \"ovn-controller-5q4gs\" (UID: \"fc8ee070-8557-4708-a58f-7e5899ed206b\") " pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.178787 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.210743 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.266883 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.270876 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.272778 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.275863 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.276019 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.276089 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.276203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bt8wc" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.276233 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386135 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386181 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-config\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386430 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fkg7p\" (UniqueName: \"kubernetes.io/projected/9b5affff-971a-4114-9a3a-2bbdace2e7b9-kube-api-access-fkg7p\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.386524 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488532 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkg7p\" (UniqueName: \"kubernetes.io/projected/9b5affff-971a-4114-9a3a-2bbdace2e7b9-kube-api-access-fkg7p\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488679 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488738 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488803 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-config\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488867 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.488968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.491525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-config\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.491641 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.492034 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b5affff-971a-4114-9a3a-2bbdace2e7b9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.498163 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.498199 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/85c6806d6ab06869cb2ed6038257c73d5d95e767ae06abd25b8fd721caa2a43f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.506464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.510731 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkg7p\" (UniqueName: \"kubernetes.io/projected/9b5affff-971a-4114-9a3a-2bbdace2e7b9-kube-api-access-fkg7p\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.511324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.512113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5affff-971a-4114-9a3a-2bbdace2e7b9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.541669 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49d14dec-708f-494d-a5ca-ef3c5c5f689a\") pod \"ovsdbserver-nb-0\" (UID: \"9b5affff-971a-4114-9a3a-2bbdace2e7b9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:28 crc kubenswrapper[4792]: I0216 21:56:28.598855 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 21:56:30 crc kubenswrapper[4792]: W0216 21:56:30.995657 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod356c7c8e_30ec_45a3_a276_b8cca48b4774.slice/crio-4a30b51873ae344314df07c0391eed90f69a53ab1a516903ff7c290279705273 WatchSource:0}: Error finding container 4a30b51873ae344314df07c0391eed90f69a53ab1a516903ff7c290279705273: Status 404 returned error can't find the container with id 4a30b51873ae344314df07c0391eed90f69a53ab1a516903ff7c290279705273 Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.144912 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.150413 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.154761 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.154914 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lqvsn" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.155054 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.155156 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.190587 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.234741 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07ce522d-6acb-4c52-aa4a-5997916ce345","Type":"ContainerStarted","Data":"d466441d68ad6b32ec9e4c1e25b1870b90940bde2a44f999f0011df8ed326212"} Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.236477 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"356c7c8e-30ec-45a3-a276-b8cca48b4774","Type":"ContainerStarted","Data":"4a30b51873ae344314df07c0391eed90f69a53ab1a516903ff7c290279705273"} Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250034 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250090 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 
crc kubenswrapper[4792]: I0216 21:56:31.250114 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtj4\" (UniqueName: \"kubernetes.io/projected/5891cbfc-31ff-494c-b21c-5de41da698c7-kube-api-access-rmtj4\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250237 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-config\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250258 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.250348 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.351736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352008 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352123 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352205 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352280 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtj4\" (UniqueName: \"kubernetes.io/projected/5891cbfc-31ff-494c-b21c-5de41da698c7-kube-api-access-rmtj4\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352491 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-config\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.352560 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.353228 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.353455 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-config\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.354121 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5891cbfc-31ff-494c-b21c-5de41da698c7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.354711 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.354743 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f386433de460b11138ac58dcb6c6ec170a53d06cc93a1dd2820427bf0cb891e0/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.358050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.358224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.358825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5891cbfc-31ff-494c-b21c-5de41da698c7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.370279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtj4\" (UniqueName: \"kubernetes.io/projected/5891cbfc-31ff-494c-b21c-5de41da698c7-kube-api-access-rmtj4\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.392155 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c55d27-c0e8-43d2-aa07-c9254e7312aa\") pod \"ovsdbserver-sb-0\" (UID: \"5891cbfc-31ff-494c-b21c-5de41da698c7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.478666 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.533119 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:56:31 crc kubenswrapper[4792]: I0216 21:56:31.533174 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:56:35 crc kubenswrapper[4792]: I0216 21:56:35.084499 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.124159 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.124949 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(9b0b0738-c9c3-4b4f-86a2-8bb113270613): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.126194 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.146677 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.146863 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v96c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(07ce522d-6acb-4c52-aa4a-5997916ce345): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.148045 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="07ce522d-6acb-4c52-aa4a-5997916ce345" Feb 16 21:56:44 crc kubenswrapper[4792]: I0216 21:56:44.374024 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97394c7a-06f3-433b-84dd-7ae885a8753d","Type":"ContainerStarted","Data":"6fd1a82d300eb2c649ce811c0e2caa1c9191a5c0cb945addf66c8981ce0a7b5f"} Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.375450 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" Feb 16 21:56:44 crc kubenswrapper[4792]: E0216 21:56:44.376072 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="07ce522d-6acb-4c52-aa4a-5997916ce345" Feb 
16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.039725 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.039897 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dczw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8tbg8_openstack(6074b703-7b92-4cb8-96ed-6a80dbdbce7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.041134 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" podUID="6074b703-7b92-4cb8-96ed-6a80dbdbce7d" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.068779 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.068922 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt88h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-pv9jk_openstack(c99c9de0-8ff3-480c-a57c-85cbc7cfb680): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.070029 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" podUID="c99c9de0-8ff3-480c-a57c-85cbc7cfb680" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.103979 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.104812 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfgsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ngn6b_openstack(8af098de-cb86-4e2e-9871-9f43335daa16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.106839 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" podUID="8af098de-cb86-4e2e-9871-9f43335daa16" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.143080 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.143211 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4zjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-ncgz2_openstack(c08c9e6a-bd47-4daa-b9e7-0209b5811652): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.144294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" podUID="c08c9e6a-bd47-4daa-b9e7-0209b5811652" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.396571 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" podUID="8af098de-cb86-4e2e-9871-9f43335daa16" Feb 16 21:56:45 crc kubenswrapper[4792]: E0216 21:56:45.396739 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" podUID="c99c9de0-8ff3-480c-a57c-85cbc7cfb680" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.118075 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.203009 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5891cbfc_31ff_494c_b21c_5de41da698c7.slice/crio-8f7b25d2705172b7ca8c5de0f743aac43374a77de7e7ec51bccb4c2d4270d930 WatchSource:0}: Error finding container 8f7b25d2705172b7ca8c5de0f743aac43374a77de7e7ec51bccb4c2d4270d930: Status 404 returned error can't find the container with id 
8f7b25d2705172b7ca8c5de0f743aac43374a77de7e7ec51bccb4c2d4270d930 Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.303847 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.381933 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.385212 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config\") pod \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.385283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4zjp\" (UniqueName: \"kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp\") pod \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\" (UID: \"c08c9e6a-bd47-4daa-b9e7-0209b5811652\") " Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.386550 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config" (OuterVolumeSpecName: "config") pod "c08c9e6a-bd47-4daa-b9e7-0209b5811652" (UID: "c08c9e6a-bd47-4daa-b9e7-0209b5811652"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.394299 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp" (OuterVolumeSpecName: "kube-api-access-w4zjp") pod "c08c9e6a-bd47-4daa-b9e7-0209b5811652" (UID: "c08c9e6a-bd47-4daa-b9e7-0209b5811652"). InnerVolumeSpecName "kube-api-access-w4zjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.398244 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cfzsw"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.415172 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5891cbfc-31ff-494c-b21c-5de41da698c7","Type":"ContainerStarted","Data":"8f7b25d2705172b7ca8c5de0f743aac43374a77de7e7ec51bccb4c2d4270d930"} Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.418432 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"356c7c8e-30ec-45a3-a276-b8cca48b4774","Type":"ContainerStarted","Data":"e8fa587a90389ffdc904239f3c14080625edd1be49ca1c2a65161b862e344b42"} Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.418653 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.420327 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" event={"ID":"c08c9e6a-bd47-4daa-b9e7-0209b5811652","Type":"ContainerDied","Data":"db379b175095406430c94796f331ae3d658fe050c4e7beede531673f111e7255"} Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.420409 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ncgz2" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.440627 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.454671 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.472098 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.341423344 podStartE2EDuration="25.472081095s" podCreationTimestamp="2026-02-16 21:56:21 +0000 UTC" firstStartedPulling="2026-02-16 21:56:31.009435204 +0000 UTC m=+1123.662714105" lastFinishedPulling="2026-02-16 21:56:45.140092975 +0000 UTC m=+1137.793371856" observedRunningTime="2026-02-16 21:56:46.445123564 +0000 UTC m=+1139.098402455" watchObservedRunningTime="2026-02-16 21:56:46.472081095 +0000 UTC m=+1139.125359986" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.484498 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc994d6fc-zlcp4"] Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.485453 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dea83c6_c1d5_4b8e_a70c_3184a366721a.slice/crio-ade9129a411cf671970a8f69318555baa08036cc7b465acb1a58e4c7badcbdeb WatchSource:0}: Error finding container ade9129a411cf671970a8f69318555baa08036cc7b465acb1a58e4c7badcbdeb: Status 404 returned error can't find the container with id ade9129a411cf671970a8f69318555baa08036cc7b465acb1a58e4c7badcbdeb Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.488572 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08c9e6a-bd47-4daa-b9e7-0209b5811652-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.488634 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4zjp\" (UniqueName: \"kubernetes.io/projected/c08c9e6a-bd47-4daa-b9e7-0209b5811652-kube-api-access-w4zjp\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.497515 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d2ecc7_d6a4_4c05_be72_ee4df484e081.slice/crio-94e3c39344c4b1a2bfa0a1e48d6a311d34e17175b1a56b9efb438fd89b964467 WatchSource:0}: Error finding container 94e3c39344c4b1a2bfa0a1e48d6a311d34e17175b1a56b9efb438fd89b964467: Status 404 returned error can't find the container with id 94e3c39344c4b1a2bfa0a1e48d6a311d34e17175b1a56b9efb438fd89b964467 Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.507398 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.563111 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.570451 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.578080 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ncgz2"] Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.589625 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dczw\" (UniqueName: \"kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw\") pod \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.589707 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config\") pod \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.589830 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc\") pod \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\" (UID: \"6074b703-7b92-4cb8-96ed-6a80dbdbce7d\") " Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.590734 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6074b703-7b92-4cb8-96ed-6a80dbdbce7d" (UID: "6074b703-7b92-4cb8-96ed-6a80dbdbce7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.591418 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config" (OuterVolumeSpecName: "config") pod "6074b703-7b92-4cb8-96ed-6a80dbdbce7d" (UID: "6074b703-7b92-4cb8-96ed-6a80dbdbce7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.594743 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw" (OuterVolumeSpecName: "kube-api-access-6dczw") pod "6074b703-7b92-4cb8-96ed-6a80dbdbce7d" (UID: "6074b703-7b92-4cb8-96ed-6a80dbdbce7d"). InnerVolumeSpecName "kube-api-access-6dczw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.602209 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc8ee070_8557_4708_a58f_7e5899ed206b.slice/crio-b7c4909632b0714ab62485880ebe86cc7c71a55acfc3b3c8e99e866baaa4fdc9 WatchSource:0}: Error finding container b7c4909632b0714ab62485880ebe86cc7c71a55acfc3b3c8e99e866baaa4fdc9: Status 404 returned error can't find the container with id b7c4909632b0714ab62485880ebe86cc7c71a55acfc3b3c8e99e866baaa4fdc9 Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.609355 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8bd9c3b_0357_4270_8e43_6d6a3da3534d.slice/crio-4bbba6826bb12f8c042a6d488233583006d9b51f95bd062cea3dd055ac003dd5 WatchSource:0}: Error finding container 4bbba6826bb12f8c042a6d488233583006d9b51f95bd062cea3dd055ac003dd5: Status 404 returned error can't find the container with id 4bbba6826bb12f8c042a6d488233583006d9b51f95bd062cea3dd055ac003dd5 Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.692157 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dczw\" (UniqueName: \"kubernetes.io/projected/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-kube-api-access-6dczw\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.692190 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:46 crc kubenswrapper[4792]: I0216 21:56:46.692204 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6074b703-7b92-4cb8-96ed-6a80dbdbce7d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:46 crc kubenswrapper[4792]: W0216 21:56:46.695617 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5affff_971a_4114_9a3a_2bbdace2e7b9.slice/crio-f109fe265c93fd5bd7419e1825e666b461428cf6a5d00c1a7d7c6d4a903a102d WatchSource:0}: Error finding container f109fe265c93fd5bd7419e1825e666b461428cf6a5d00c1a7d7c6d4a903a102d: Status 404 returned error can't find the container with id f109fe265c93fd5bd7419e1825e666b461428cf6a5d00c1a7d7c6d4a903a102d Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.428926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs" event={"ID":"fc8ee070-8557-4708-a58f-7e5899ed206b","Type":"ContainerStarted","Data":"b7c4909632b0714ab62485880ebe86cc7c71a55acfc3b3c8e99e866baaa4fdc9"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.430340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc994d6fc-zlcp4" event={"ID":"f0aa11f8-95af-4212-80c4-c5f59a05ddc1","Type":"ContainerStarted","Data":"36805060ded5448da9b91bffce193f18bb0c2957599dd079aaab7ebfa9322fce"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.431258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9b5affff-971a-4114-9a3a-2bbdace2e7b9","Type":"ContainerStarted","Data":"f109fe265c93fd5bd7419e1825e666b461428cf6a5d00c1a7d7c6d4a903a102d"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.432738 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea","Type":"ContainerStarted","Data":"47f6cd383d7681973ea4615333932b882f8261bb0021759ae87eb80f74d08fbb"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.433813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cfzsw" event={"ID":"60d2ecc7-d6a4-4c05-be72-ee4df484e081","Type":"ContainerStarted","Data":"94e3c39344c4b1a2bfa0a1e48d6a311d34e17175b1a56b9efb438fd89b964467"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.435978 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.435981 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8tbg8" event={"ID":"6074b703-7b92-4cb8-96ed-6a80dbdbce7d","Type":"ContainerDied","Data":"22e2d05616a1feb969c10e93d0085227b22946f453981c4b82ab172b90d743d7"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.437359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerStarted","Data":"4bbba6826bb12f8c042a6d488233583006d9b51f95bd062cea3dd055ac003dd5"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.439086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" event={"ID":"6dea83c6-c1d5-4b8e-a70c-3184a366721a","Type":"ContainerStarted","Data":"ade9129a411cf671970a8f69318555baa08036cc7b465acb1a58e4c7badcbdeb"} Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.513105 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:47 crc kubenswrapper[4792]: I0216 21:56:47.521753 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8tbg8"] Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.045557 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6074b703-7b92-4cb8-96ed-6a80dbdbce7d" path="/var/lib/kubelet/pods/6074b703-7b92-4cb8-96ed-6a80dbdbce7d/volumes" Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.046375 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08c9e6a-bd47-4daa-b9e7-0209b5811652" path="/var/lib/kubelet/pods/c08c9e6a-bd47-4daa-b9e7-0209b5811652/volumes" Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.450928 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97394c7a-06f3-433b-84dd-7ae885a8753d","Type":"ContainerStarted","Data":"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd"} Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.451217 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.454650 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerStarted","Data":"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2"} Feb 16 21:56:48 crc kubenswrapper[4792]: I0216 21:56:48.474780 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.829826658 podStartE2EDuration="24.47476627s" podCreationTimestamp="2026-02-16 21:56:24 +0000 UTC" firstStartedPulling="2026-02-16 
21:56:44.174528954 +0000 UTC m=+1136.827807855" lastFinishedPulling="2026-02-16 21:56:47.819468576 +0000 UTC m=+1140.472747467" observedRunningTime="2026-02-16 21:56:48.46874702 +0000 UTC m=+1141.122025911" watchObservedRunningTime="2026-02-16 21:56:48.47476627 +0000 UTC m=+1141.128045161" Feb 16 21:56:49 crc kubenswrapper[4792]: I0216 21:56:49.468970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc994d6fc-zlcp4" event={"ID":"f0aa11f8-95af-4212-80c4-c5f59a05ddc1","Type":"ContainerStarted","Data":"e23f56a13dd981023696f08ba3c328845b59416b766b737ebcc93661bd78da58"} Feb 16 21:56:49 crc kubenswrapper[4792]: I0216 21:56:49.473365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerStarted","Data":"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a"} Feb 16 21:56:49 crc kubenswrapper[4792]: I0216 21:56:49.478708 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerStarted","Data":"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"} Feb 16 21:56:49 crc kubenswrapper[4792]: I0216 21:56:49.536816 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bc994d6fc-zlcp4" podStartSLOduration=24.536790981 podStartE2EDuration="24.536790981s" podCreationTimestamp="2026-02-16 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:56:49.497285135 +0000 UTC m=+1142.150564036" watchObservedRunningTime="2026-02-16 21:56:49.536790981 +0000 UTC m=+1142.190069872" Feb 16 21:56:51 crc kubenswrapper[4792]: I0216 21:56:51.493954 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce68e433-fd1b-4a65-84e2-33ecf84fc4ea" containerID="47f6cd383d7681973ea4615333932b882f8261bb0021759ae87eb80f74d08fbb" exitCode=0 Feb 16 21:56:51 crc kubenswrapper[4792]: I0216 21:56:51.494194 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea","Type":"ContainerDied","Data":"47f6cd383d7681973ea4615333932b882f8261bb0021759ae87eb80f74d08fbb"} Feb 16 21:56:52 crc kubenswrapper[4792]: I0216 21:56:52.333526 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.512320 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" event={"ID":"6dea83c6-c1d5-4b8e-a70c-3184a366721a","Type":"ContainerStarted","Data":"c03d152b574adf7dba4fdf0646eee3cd16276d5faec6df543cfbf0224f02eee5"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.515975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs" event={"ID":"fc8ee070-8557-4708-a58f-7e5899ed206b","Type":"ContainerStarted","Data":"3ff1c2be265e8e4409e9042dd3cb62c1bb33eb89465a37cae409383ebc8e734b"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.516415 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5q4gs" Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.518559 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
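The pod_startup_latency_tracker entries above carry two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, so registry pull time does not count against the startup SLO; the console pod, whose pull timestamps are zero, therefore has identical values for both. A stdlib-only sketch reproducing the kube-state-metrics-0 numbers (the field semantics are inferred from the values in the log, not taken from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // startupDurations mirrors the relation visible in the tracker entries:
    // e2e = running - created; slo = e2e minus the image-pull window.
    func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
        e2e = running.Sub(created)
        slo = e2e - lastPull.Sub(firstPull)
        return slo, e2e
    }

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // values from the kube-state-metrics-0 entry above, rewritten in RFC 3339
        created := parse("2026-02-16T21:56:24Z")
        firstPull := parse("2026-02-16T21:56:44.174528954Z")
        lastPull := parse("2026-02-16T21:56:47.819468576Z")
        running := parse("2026-02-16T21:56:48.47476627Z") // watchObservedRunningTime
        slo, e2e := startupDurations(created, firstPull, lastPull, running)
        fmt.Println(slo, e2e) // ~20.83s and 24.47476627s, matching the entry to within rounding
    }

The tiny residual difference against podStartSLOduration=20.829826658 is plausibly the monotonic (m=+...) readings kubelet subtracts internally rather than the wall-clock strings logged here.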
event={"ID":"5891cbfc-31ff-494c-b21c-5de41da698c7","Type":"ContainerStarted","Data":"45db4b7398ce3662f93438435d76c1434ed724b3b2aa4b479927f6609b65abeb"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.520175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9b5affff-971a-4114-9a3a-2bbdace2e7b9","Type":"ContainerStarted","Data":"ee2dbcd72efd4ae50b66edbc18db5d7f0ea12d1a9bcf199f65ec5460f54df004"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.522074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce68e433-fd1b-4a65-84e2-33ecf84fc4ea","Type":"ContainerStarted","Data":"886cd6e28e6290e09f5438f3fbc1b32e08e9cd25a1ca3567d0e6665d81836598"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.530810 4792 generic.go:334] "Generic (PLEG): container finished" podID="60d2ecc7-d6a4-4c05-be72-ee4df484e081" containerID="7db4bd9e01834aba4a6cf875965ca5724d20bdc569498352bb40673c0bda665d" exitCode=0 Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.530870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cfzsw" event={"ID":"60d2ecc7-d6a4-4c05-be72-ee4df484e081","Type":"ContainerDied","Data":"7db4bd9e01834aba4a6cf875965ca5724d20bdc569498352bb40673c0bda665d"} Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.538482 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8nqmm" podStartSLOduration=22.933449905 podStartE2EDuration="28.538459664s" podCreationTimestamp="2026-02-16 21:56:25 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.491135855 +0000 UTC m=+1139.144414746" lastFinishedPulling="2026-02-16 21:56:52.096145614 +0000 UTC m=+1144.749424505" observedRunningTime="2026-02-16 21:56:53.529628048 +0000 UTC m=+1146.182906949" watchObservedRunningTime="2026-02-16 21:56:53.538459664 +0000 UTC m=+1146.191738575" Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.550864 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.804435489 podStartE2EDuration="34.550847155s" podCreationTimestamp="2026-02-16 21:56:19 +0000 UTC" firstStartedPulling="2026-02-16 21:56:21.638324092 +0000 UTC m=+1114.291602983" lastFinishedPulling="2026-02-16 21:56:45.384735758 +0000 UTC m=+1138.038014649" observedRunningTime="2026-02-16 21:56:53.546115148 +0000 UTC m=+1146.199394039" watchObservedRunningTime="2026-02-16 21:56:53.550847155 +0000 UTC m=+1146.204126036" Feb 16 21:56:53 crc kubenswrapper[4792]: I0216 21:56:53.579235 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5q4gs" podStartSLOduration=21.090393901 podStartE2EDuration="26.579213234s" podCreationTimestamp="2026-02-16 21:56:27 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.604098975 +0000 UTC m=+1139.257377866" lastFinishedPulling="2026-02-16 21:56:52.092918308 +0000 UTC m=+1144.746197199" observedRunningTime="2026-02-16 21:56:53.569209016 +0000 UTC m=+1146.222487917" watchObservedRunningTime="2026-02-16 21:56:53.579213234 +0000 UTC m=+1146.232492125" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.551691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cfzsw" event={"ID":"60d2ecc7-d6a4-4c05-be72-ee4df484e081","Type":"ContainerStarted","Data":"5974f73e3ce4c6793459ebe480096eccb53b4fbe543d0250c8786a16518c191f"} Feb 16 21:56:54 crc 
kubenswrapper[4792]: I0216 21:56:54.667182 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.760935 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.782727 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"] Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.784822 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.802241 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"] Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.890726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcn8t\" (UniqueName: \"kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.890796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.890831 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.994040 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcn8t\" (UniqueName: \"kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.994116 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.994154 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.995511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 
16 21:56:54 crc kubenswrapper[4792]: I0216 21:56:54.996323 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.019883 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcn8t\" (UniqueName: \"kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t\") pod \"dnsmasq-dns-7cb5889db5-g4799\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.134869 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.323492 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.418336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc\") pod \"8af098de-cb86-4e2e-9871-9f43335daa16\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.418850 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config\") pod \"8af098de-cb86-4e2e-9871-9f43335daa16\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.418945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfgsh\" (UniqueName: \"kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh\") pod \"8af098de-cb86-4e2e-9871-9f43335daa16\" (UID: \"8af098de-cb86-4e2e-9871-9f43335daa16\") " Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.422079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8af098de-cb86-4e2e-9871-9f43335daa16" (UID: "8af098de-cb86-4e2e-9871-9f43335daa16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.422354 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config" (OuterVolumeSpecName: "config") pod "8af098de-cb86-4e2e-9871-9f43335daa16" (UID: "8af098de-cb86-4e2e-9871-9f43335daa16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.428771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh" (OuterVolumeSpecName: "kube-api-access-tfgsh") pod "8af098de-cb86-4e2e-9871-9f43335daa16" (UID: "8af098de-cb86-4e2e-9871-9f43335daa16"). InnerVolumeSpecName "kube-api-access-tfgsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.528182 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.528414 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfgsh\" (UniqueName: \"kubernetes.io/projected/8af098de-cb86-4e2e-9871-9f43335daa16-kube-api-access-tfgsh\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.528424 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8af098de-cb86-4e2e-9871-9f43335daa16-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.578541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerStarted","Data":"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257"} Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.598822 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.598822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ngn6b" event={"ID":"8af098de-cb86-4e2e-9871-9f43335daa16","Type":"ContainerDied","Data":"bd2d4b528d45744de036205ea9ef36859a093755920347c965c51151acf875dd"} Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.615640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5891cbfc-31ff-494c-b21c-5de41da698c7","Type":"ContainerStarted","Data":"364410630ddacc528998edcec809f5cfb5aaa5ddbb15bdcdb4243b824ace74a7"} Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.669625 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.757319372 podStartE2EDuration="25.669582764s" podCreationTimestamp="2026-02-16 21:56:30 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.216318175 +0000 UTC m=+1138.869597056" lastFinishedPulling="2026-02-16 21:56:55.128581557 +0000 UTC m=+1147.781860448" observedRunningTime="2026-02-16 21:56:55.663912783 +0000 UTC m=+1148.317191674" watchObservedRunningTime="2026-02-16 21:56:55.669582764 +0000 UTC m=+1148.322861655" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.727401 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.744067 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ngn6b"] Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.807486 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.813683 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.815891 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.819864 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.819918 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.819948 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mm9x7" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.837819 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.900247 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"] Feb 16 21:56:55 crc kubenswrapper[4792]: W0216 21:56:55.906609 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1180cfb_8f6c_48eb_baec_1915b5ba377b.slice/crio-2999c742e5023721971de0cc40c26400b6d1a8b61406d613354ed625fa8c8c1e WatchSource:0}: Error finding container 2999c742e5023721971de0cc40c26400b6d1a8b61406d613354ed625fa8c8c1e: Status 404 returned error can't find the container with id 2999c742e5023721971de0cc40c26400b6d1a8b61406d613354ed625fa8c8c1e Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.934768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2e70adc-ae64-40cf-831a-924e82077836\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2e70adc-ae64-40cf-831a-924e82077836\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.935031 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ada762-95ad-4810-b5da-b4ca59652a45-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.935059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c542k\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-kube-api-access-c542k\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.935080 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-lock\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: I0216 21:56:55.935122 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-cache\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:55 crc kubenswrapper[4792]: 
I0216 21:56:55.935155 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.036751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2e70adc-ae64-40cf-831a-924e82077836\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2e70adc-ae64-40cf-831a-924e82077836\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.036921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ada762-95ad-4810-b5da-b4ca59652a45-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.037030 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-lock\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.037092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c542k\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-kube-api-access-c542k\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.037198 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-cache\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.037277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.037525 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.037585 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.037685 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift podName:e2ada762-95ad-4810-b5da-b4ca59652a45 nodeName:}" failed. No retries permitted until 2026-02-16 21:56:56.537669238 +0000 UTC m=+1149.190948129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift") pod "swift-storage-0" (UID: "e2ada762-95ad-4810-b5da-b4ca59652a45") : configmap "swift-ring-files" not found Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.037859 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-lock\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.038269 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e2ada762-95ad-4810-b5da-b4ca59652a45-cache\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.041058 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af098de-cb86-4e2e-9871-9f43335daa16" path="/var/lib/kubelet/pods/8af098de-cb86-4e2e-9871-9f43335daa16/volumes" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.045391 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ada762-95ad-4810-b5da-b4ca59652a45-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.055389 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.055546 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2e70adc-ae64-40cf-831a-924e82077836\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2e70adc-ae64-40cf-831a-924e82077836\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b46cb65029b128b2e39ee6b239e9db9ff65fd517d64bb2b3b33d1fca6892275/globalmount\"" pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.059547 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c542k\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-kube-api-access-c542k\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.153281 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qlqfk"] Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.185337 4792 util.go:30] "No sandbox for pod can be found. 
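Every MountVolume.SetUp failure for etc-swift above has the same root cause: the volume is a projected volume sourced from the swift-ring-files ConfigMap, which does not exist yet at mount time (the reflector line only shows a watch being registered for it). The swift-ring-rebalance-qlqfk job that was just ADDed is presumably what will publish the ring files. A sketch of what such a volume looks like in a pod spec, written against k8s.io/api/core/v1; the exact structure is an assumption from the volume name and plugin shown in the errors, not the operator's actual manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // etc-swift as a projected volume over a ConfigMap, mirroring the
        // "kubernetes.io/projected/...-etc-swift" UniqueName in the log.
        vol := corev1.Volume{
            Name: "etc-swift",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
                            // Optional defaults to false, so SetUp fails (and the
                            // kubelet retries) until the ConfigMap exists -- exactly
                            // the `configmap "swift-ring-files" not found` loop above.
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name, vol.Projected.Sources[0].ConfigMap.Name)
    }

Marking the projection Optional would let the pod start with an empty directory instead; keeping it required, as here, makes the kubelet hold swift-storage-0 back until the rebalance job has produced the rings.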
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.185337 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.188524 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.190510 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.190537 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.192753 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qlqfk"]
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.210109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2e70adc-ae64-40cf-831a-924e82077836\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2e70adc-ae64-40cf-831a-924e82077836\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.245941 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246084 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246420 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.246825 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9pc\" (UniqueName: \"kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.327552 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bc994d6fc-zlcp4"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.327792 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bc994d6fc-zlcp4"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.333379 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bc994d6fc-zlcp4"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.348979 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349031 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9pc\" (UniqueName: \"kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349178 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349254 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.349305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.350260 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.350665 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.351106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.353247 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.358217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.360094 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.377722 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9pc\" (UniqueName: \"kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc\") pod \"swift-ring-rebalance-qlqfk\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.478838 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.547978 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qlqfk"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.553763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.553962 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.554009 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 21:56:56 crc kubenswrapper[4792]: E0216 21:56:56.554084 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift podName:e2ada762-95ad-4810-b5da-b4ca59652a45 nodeName:}" failed. No retries permitted until 2026-02-16 21:56:57.554062107 +0000 UTC m=+1150.207340998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift") pod "swift-storage-0" (UID: "e2ada762-95ad-4810-b5da-b4ca59652a45") : configmap "swift-ring-files" not found
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.623283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9b5affff-971a-4114-9a3a-2bbdace2e7b9","Type":"ContainerStarted","Data":"fd976d1686336bc4a24866266cfc025d443c3b1a78122e19aebea6da38ccd19f"}
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.636179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07ce522d-6acb-4c52-aa4a-5997916ce345","Type":"ContainerStarted","Data":"e7b0178410136ac4ce8f92da369a038428912236be569c2844aea11a8e3a6387"}
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.646782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cfzsw" event={"ID":"60d2ecc7-d6a4-4c05-be72-ee4df484e081","Type":"ContainerStarted","Data":"1c1a95b3b460557c267d07cb12d99750abda385ac0e94dd98d4982a41e6d62a2"}
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.647478 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cfzsw"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.647512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cfzsw"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.650714 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" event={"ID":"a1180cfb-8f6c-48eb-baec-1915b5ba377b","Type":"ContainerStarted","Data":"2999c742e5023721971de0cc40c26400b6d1a8b61406d613354ed625fa8c8c1e"}
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.661073 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bc994d6fc-zlcp4"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.682715 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.261006234 podStartE2EDuration="29.682694687s" podCreationTimestamp="2026-02-16 21:56:27 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.699348652 +0000 UTC m=+1139.352627543" lastFinishedPulling="2026-02-16 21:56:55.121037105 +0000 UTC m=+1147.774315996" observedRunningTime="2026-02-16 21:56:56.66784739 +0000 UTC m=+1149.321126281" watchObservedRunningTime="2026-02-16 21:56:56.682694687 +0000 UTC m=+1149.335973578"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.800619 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cfzsw" podStartSLOduration=24.213324746 podStartE2EDuration="29.80058357s" podCreationTimestamp="2026-02-16 21:56:27 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.504979005 +0000 UTC m=+1139.158257906" lastFinishedPulling="2026-02-16 21:56:52.092237839 +0000 UTC m=+1144.745516730" observedRunningTime="2026-02-16 21:56:56.78975661 +0000 UTC m=+1149.443035511" watchObservedRunningTime="2026-02-16 21:56:56.80058357 +0000 UTC m=+1149.453862461"
Feb 16 21:56:56 crc kubenswrapper[4792]: I0216 21:56:56.853871 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"]
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.245418 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qlqfk"]
Feb 16 21:56:57 crc kubenswrapper[4792]: W0216 21:56:57.248282 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbebd5c80_d002_49e6_ac52_d1d323b83801.slice/crio-e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed WatchSource:0}: Error finding container e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed: Status 404 returned error can't find the container with id e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.583427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:56:57 crc kubenswrapper[4792]: E0216 21:56:57.583641 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 21:56:57 crc kubenswrapper[4792]: E0216 21:56:57.583871 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 21:56:57 crc kubenswrapper[4792]: E0216 21:56:57.583940 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift podName:e2ada762-95ad-4810-b5da-b4ca59652a45 nodeName:}" failed. No retries permitted until 2026-02-16 21:56:59.583915137 +0000 UTC m=+1152.237194038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift") pod "swift-storage-0" (UID: "e2ada762-95ad-4810-b5da-b4ca59652a45") : configmap "swift-ring-files" not found
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.659135 4792 generic.go:334] "Generic (PLEG): container finished" podID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerID="962a728682e3e27c72b7f2212a36a602c6a1b375b009c2fff603afe3d02ff876" exitCode=0
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.659193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" event={"ID":"a1180cfb-8f6c-48eb-baec-1915b5ba377b","Type":"ContainerDied","Data":"962a728682e3e27c72b7f2212a36a602c6a1b375b009c2fff603afe3d02ff876"}
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.664245 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qlqfk" event={"ID":"bebd5c80-d002-49e6-ac52-d1d323b83801","Type":"ContainerStarted","Data":"e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed"}
Feb 16 21:56:57 crc kubenswrapper[4792]: I0216 21:56:57.666307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerStarted","Data":"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a"}
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.479122 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.524150 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.599644 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.599685 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.651281 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.683785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" event={"ID":"a1180cfb-8f6c-48eb-baec-1915b5ba377b","Type":"ContainerStarted","Data":"233ebfc702c2f3e288418d049b67403077dbe40386e4f66e071d3c60be08f77a"}
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.712669 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" podStartSLOduration=4.301949659 podStartE2EDuration="4.712646591s" podCreationTimestamp="2026-02-16 21:56:54 +0000 UTC" firstStartedPulling="2026-02-16 21:56:55.910082606 +0000 UTC m=+1148.563361487" lastFinishedPulling="2026-02-16 21:56:56.320779528 +0000 UTC m=+1148.974058419" observedRunningTime="2026-02-16 21:56:58.698935425 +0000 UTC m=+1151.352214316" watchObservedRunningTime="2026-02-16 21:56:58.712646591 +0000 UTC m=+1151.365925492"
Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.736922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
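The three failed SetUp attempts for etc-swift are spaced by durationBeforeRetry 500ms, then 1s, then 2s: the nested pending operations tracker doubles the backoff after each failure of the same volume operation, and each new attempt is still driven by the regular reconciler pass once the deadline expires. A stdlib sketch of that doubling; the 500ms base and factor of 2 are visible in the log, while the cap value is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff mirrors the durationBeforeRetry progression above:
    // 500ms -> 1s -> 2s -> ... doubling after each failed attempt, up to a cap.
    func nextBackoff(cur, base, limit time.Duration) time.Duration {
        if cur == 0 {
            return base
        }
        if next := 2 * cur; next < limit {
            return next
        }
        return limit
    }

    func main() {
        const base = 500 * time.Millisecond
        const limit = 2*time.Minute + 2*time.Second // assumed cap, for illustration
        var d time.Duration
        for attempt := 1; attempt <= 5; attempt++ {
            d = nextBackoff(d, base, limit)
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
        }
    }

Only the first three steps appear in this log window, which suggests the swift-ring-files ConfigMap showed up before the backoff grew any further.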
pod="openstack/ovsdbserver-sb-0" Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.914919 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.979424 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.981996 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:56:58 crc kubenswrapper[4792]: I0216 21:56:58.988106 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.019081 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.020167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.020323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.020409 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64bsj\" (UniqueName: \"kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.020464 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.088841 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rzhpq"] Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.094564 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.094564 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.099725 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.112776 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rzhpq"]
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.123163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bsj\" (UniqueName: \"kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.123224 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.123289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.123426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.126820 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.127439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.126076 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.184356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bsj\" (UniqueName: \"kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj\") pod \"dnsmasq-dns-57d65f699f-5wwfc\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a50771-3519-451e-af83-32d1da662062-config\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225346 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovs-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225369 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovn-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225394 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225481 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-combined-ca-bundle\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.225522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9gz\" (UniqueName: \"kubernetes.io/projected/76a50771-3519-451e-af83-32d1da662062-kube-api-access-jg9gz\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.272293 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"]
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.320321 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"]
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.329852 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9gz\" (UniqueName: \"kubernetes.io/projected/76a50771-3519-451e-af83-32d1da662062-kube-api-access-jg9gz\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.334360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a50771-3519-451e-af83-32d1da662062-config\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.334522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovs-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.334627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovn-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.334723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.335010 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-combined-ca-bundle\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.339241 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.340248 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovs-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.340709 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/76a50771-3519-451e-af83-32d1da662062-ovn-rundir\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.340893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a50771-3519-451e-af83-32d1da662062-config\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.344873 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.356759 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.366408 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.375208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a50771-3519-451e-af83-32d1da662062-combined-ca-bundle\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.389824 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.394665 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.395698 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8bmh8"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.395859 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.395995 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.396240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9gz\" (UniqueName: \"kubernetes.io/projected/76a50771-3519-451e-af83-32d1da662062-kube-api-access-jg9gz\") pod \"ovn-controller-metrics-rzhpq\" (UID: \"76a50771-3519-451e-af83-32d1da662062\") " pod="openstack/ovn-controller-metrics-rzhpq"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.408865 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.416845 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"]
Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.417300 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-metrics-rzhpq" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.433659 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436717 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436808 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436832 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436859 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw6n\" (UniqueName: \"kubernetes.io/projected/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-kube-api-access-gnw6n\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436881 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-config\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436938 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-scripts\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.436996 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.437018 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.437033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579g4\" (UniqueName: \"kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.437063 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.437123 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.539494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.539844 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.539889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw6n\" (UniqueName: \"kubernetes.io/projected/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-kube-api-access-gnw6n\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.539920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-config\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540006 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-scripts\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540090 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540145 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579g4\" (UniqueName: \"kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.540319 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.541690 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.546181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.546999 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.547677 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.548794 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.552694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-config\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.552733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.553553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-scripts\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.563084 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.587735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw6n\" (UniqueName: \"kubernetes.io/projected/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-kube-api-access-gnw6n\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.591485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af85927-1a78-41d9-8d3d-cfef6f7f9d20-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6af85927-1a78-41d9-8d3d-cfef6f7f9d20\") " pod="openstack/ovn-northd-0" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.599324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579g4\" (UniqueName: \"kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4\") pod \"dnsmasq-dns-b8fbc5445-qfzrg\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.641677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0" Feb 16 21:56:59 crc kubenswrapper[4792]: E0216 21:56:59.642831 4792 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:56:59 crc kubenswrapper[4792]: E0216 21:56:59.642851 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:56:59 crc kubenswrapper[4792]: E0216 21:56:59.642894 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift podName:e2ada762-95ad-4810-b5da-b4ca59652a45 nodeName:}" failed. No retries permitted until 2026-02-16 21:57:03.642879268 +0000 UTC m=+1156.296158159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift") pod "swift-storage-0" (UID: "e2ada762-95ad-4810-b5da-b4ca59652a45") : configmap "swift-ring-files" not found Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.699645 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.817047 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:56:59 crc kubenswrapper[4792]: I0216 21:56:59.873500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 21:57:00 crc kubenswrapper[4792]: I0216 21:57:00.709709 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="dnsmasq-dns" containerID="cri-o://233ebfc702c2f3e288418d049b67403077dbe40386e4f66e071d3c60be08f77a" gracePeriod=10 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.094961 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.095029 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.195803 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.532151 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.532203 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.532244 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.533049 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9"} 
pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.533101 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9" gracePeriod=600 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.738771 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9" exitCode=0 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.738811 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9"} Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.738860 4792 scope.go:117] "RemoveContainer" containerID="5420a3bd3715be693aa677b143ac196347b01bc4bf5c8c37000962c99194f7f7" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.740546 4792 generic.go:334] "Generic (PLEG): container finished" podID="07ce522d-6acb-4c52-aa4a-5997916ce345" containerID="e7b0178410136ac4ce8f92da369a038428912236be569c2844aea11a8e3a6387" exitCode=0 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.740611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07ce522d-6acb-4c52-aa4a-5997916ce345","Type":"ContainerDied","Data":"e7b0178410136ac4ce8f92da369a038428912236be569c2844aea11a8e3a6387"} Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.747504 4792 generic.go:334] "Generic (PLEG): container finished" podID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerID="233ebfc702c2f3e288418d049b67403077dbe40386e4f66e071d3c60be08f77a" exitCode=0 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.747577 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" event={"ID":"a1180cfb-8f6c-48eb-baec-1915b5ba377b","Type":"ContainerDied","Data":"233ebfc702c2f3e288418d049b67403077dbe40386e4f66e071d3c60be08f77a"} Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.756483 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerID="41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257" exitCode=0 Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.756550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerDied","Data":"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257"} Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.762061 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" event={"ID":"c99c9de0-8ff3-480c-a57c-85cbc7cfb680","Type":"ContainerDied","Data":"1ed6d92376a2419bb68c20a3c90a13727b4b73c9453f6e6151c0c3776ce58380"} Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.762341 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed6d92376a2419bb68c20a3c90a13727b4b73c9453f6e6151c0c3776ce58380" 
Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.835693 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.880788 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.996086 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt88h\" (UniqueName: \"kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h\") pod \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.996987 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc\") pod \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " Feb 16 21:57:01 crc kubenswrapper[4792]: I0216 21:57:01.997534 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c99c9de0-8ff3-480c-a57c-85cbc7cfb680" (UID: "c99c9de0-8ff3-480c-a57c-85cbc7cfb680"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.000270 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config\") pod \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\" (UID: \"c99c9de0-8ff3-480c-a57c-85cbc7cfb680\") " Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.001215 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.001722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config" (OuterVolumeSpecName: "config") pod "c99c9de0-8ff3-480c-a57c-85cbc7cfb680" (UID: "c99c9de0-8ff3-480c-a57c-85cbc7cfb680"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.009576 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h" (OuterVolumeSpecName: "kube-api-access-rt88h") pod "c99c9de0-8ff3-480c-a57c-85cbc7cfb680" (UID: "c99c9de0-8ff3-480c-a57c-85cbc7cfb680"). InnerVolumeSpecName "kube-api-access-rt88h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.103409 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.103733 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt88h\" (UniqueName: \"kubernetes.io/projected/c99c9de0-8ff3-480c-a57c-85cbc7cfb680-kube-api-access-rt88h\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.206840 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.306308 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcn8t\" (UniqueName: \"kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t\") pod \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.306360 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc\") pod \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.306780 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config\") pod \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\" (UID: \"a1180cfb-8f6c-48eb-baec-1915b5ba377b\") " Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.341763 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t" (OuterVolumeSpecName: "kube-api-access-jcn8t") pod "a1180cfb-8f6c-48eb-baec-1915b5ba377b" (UID: "a1180cfb-8f6c-48eb-baec-1915b5ba377b"). InnerVolumeSpecName "kube-api-access-jcn8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.371965 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config" (OuterVolumeSpecName: "config") pod "a1180cfb-8f6c-48eb-baec-1915b5ba377b" (UID: "a1180cfb-8f6c-48eb-baec-1915b5ba377b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.415042 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.415378 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcn8t\" (UniqueName: \"kubernetes.io/projected/a1180cfb-8f6c-48eb-baec-1915b5ba377b-kube-api-access-jcn8t\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.418274 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1180cfb-8f6c-48eb-baec-1915b5ba377b" (UID: "a1180cfb-8f6c-48eb-baec-1915b5ba377b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.516790 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1180cfb-8f6c-48eb-baec-1915b5ba377b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.617973 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6372-account-create-update-tsd5b"] Feb 16 21:57:02 crc kubenswrapper[4792]: E0216 21:57:02.618538 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="dnsmasq-dns" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.618558 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="dnsmasq-dns" Feb 16 21:57:02 crc kubenswrapper[4792]: E0216 21:57:02.618620 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="init" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.618631 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="init" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.618873 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" containerName="dnsmasq-dns" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.619845 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.625850 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.640837 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tzwjt"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.643216 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.663014 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6372-account-create-update-tsd5b"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.673123 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tzwjt"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.729574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts\") pod \"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.729697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts\") pod \"glance-db-create-tzwjt\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.729720 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx6lw\" (UniqueName: \"kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw\") pod \"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.729809 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b292m\" (UniqueName: \"kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m\") pod \"glance-db-create-tzwjt\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.796829 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.805724 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rzhpq"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.812819 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qlqfk" event={"ID":"bebd5c80-d002-49e6-ac52-d1d323b83801","Type":"ContainerStarted","Data":"24a8af67cd13e5538efc6f90d1698af657d195c007bb74ec286fc3106eb4d661"} Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.824537 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.831391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b292m\" (UniqueName: \"kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m\") pod \"glance-db-create-tzwjt\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.831511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts\") pod 
\"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.831577 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts\") pod \"glance-db-create-tzwjt\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.831609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx6lw\" (UniqueName: \"kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw\") pod \"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.832830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts\") pod \"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.834151 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts\") pod \"glance-db-create-tzwjt\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.839765 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.843916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"07ce522d-6acb-4c52-aa4a-5997916ce345","Type":"ContainerStarted","Data":"f98e3263b1e2ce52b91679db372111f950fc9e7eec32873b9368a5d063ad4b18"} Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.843763 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qlqfk" podStartSLOduration=2.350212389 podStartE2EDuration="6.843745786s" podCreationTimestamp="2026-02-16 21:56:56 +0000 UTC" firstStartedPulling="2026-02-16 21:56:57.250582573 +0000 UTC m=+1149.903861464" lastFinishedPulling="2026-02-16 21:57:01.74411597 +0000 UTC m=+1154.397394861" observedRunningTime="2026-02-16 21:57:02.842917933 +0000 UTC m=+1155.496196824" watchObservedRunningTime="2026-02-16 21:57:02.843745786 +0000 UTC m=+1155.497024677" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.869234 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx6lw\" (UniqueName: \"kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw\") pod \"glance-6372-account-create-update-tsd5b\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") " pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.871860 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b292m\" (UniqueName: \"kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m\") pod \"glance-db-create-tzwjt\" (UID: 
\"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") " pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.873126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4"} Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.880306 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" event={"ID":"a1180cfb-8f6c-48eb-baec-1915b5ba377b","Type":"ContainerDied","Data":"2999c742e5023721971de0cc40c26400b6d1a8b61406d613354ed625fa8c8c1e"} Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.880354 4792 scope.go:117] "RemoveContainer" containerID="233ebfc702c2f3e288418d049b67403077dbe40386e4f66e071d3c60be08f77a" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.882842 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-g4799" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.888935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" event={"ID":"2df2814e-70ee-40f3-9efe-4d7cfe16bd38","Type":"ContainerStarted","Data":"0754b3b9a09df825a1968a86d8b1158f158b8068dadb70cd0544749240c858d0"} Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.889157 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pv9jk" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.890174 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6372-account-create-update-tsd5b" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.932565 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371993.922235 podStartE2EDuration="42.93254069s" podCreationTimestamp="2026-02-16 21:56:20 +0000 UTC" firstStartedPulling="2026-02-16 21:56:31.009436924 +0000 UTC m=+1123.662715855" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:02.890687661 +0000 UTC m=+1155.543966552" watchObservedRunningTime="2026-02-16 21:57:02.93254069 +0000 UTC m=+1155.585819581" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.934584 4792 scope.go:117] "RemoveContainer" containerID="962a728682e3e27c72b7f2212a36a602c6a1b375b009c2fff603afe3d02ff876" Feb 16 21:57:02 crc kubenswrapper[4792]: I0216 21:57:02.957635 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tzwjt" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.033939 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.086831 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pv9jk"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.095981 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.145081 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-g4799"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.251546 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jl449"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.258585 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jl449" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.265458 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jl449"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.359448 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5331-account-create-update-qsq8t"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.373515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5331-account-create-update-qsq8t" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.377708 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.423347 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5331-account-create-update-qsq8t"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.456141 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.456206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngnk\" (UniqueName: \"kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.550756 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qjr26"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.552414 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qjr26" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.571674 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qjr26"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.571737 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-92e9-account-create-update-g97nz"] Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.573275 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-92e9-account-create-update-g97nz" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkqp\" (UniqueName: \"kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngnk\" (UniqueName: \"kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575389 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p282s\" (UniqueName: \"kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.575559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksp9g\" (UniqueName: \"kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t" Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.576369 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.577869 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.603023 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-92e9-account-create-update-g97nz"]
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.627789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngnk\" (UniqueName: \"kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk\") pod \"keystone-db-create-jl449\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") " pod="openstack/keystone-db-create-jl449"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.640068 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jl449"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.680407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksp9g\" (UniqueName: \"kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.681887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.681917 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkqp\" (UniqueName: \"kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.681970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.681988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.682010 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.682049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p282s\" (UniqueName: \"kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:03 crc kubenswrapper[4792]: E0216 21:57:03.682359 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 21:57:03 crc kubenswrapper[4792]: E0216 21:57:03.682375 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 21:57:03 crc kubenswrapper[4792]: E0216 21:57:03.682410 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift podName:e2ada762-95ad-4810-b5da-b4ca59652a45 nodeName:}" failed. No retries permitted until 2026-02-16 21:57:11.682397433 +0000 UTC m=+1164.335676324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift") pod "swift-storage-0" (UID: "e2ada762-95ad-4810-b5da-b4ca59652a45") : configmap "swift-ring-files" not found
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.683474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.684374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.686922 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.701571 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6372-account-create-update-tsd5b"]
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.708129 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p282s\" (UniqueName: \"kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s\") pod \"placement-db-create-qjr26\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") " pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.710666 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkqp\" (UniqueName: \"kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp\") pod \"placement-92e9-account-create-update-g97nz\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") " pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.729997 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksp9g\" (UniqueName: \"kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g\") pod \"keystone-5331-account-create-update-qsq8t\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") " pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:03 crc kubenswrapper[4792]: W0216 21:57:03.730383 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf67a67b7_bc6b_438b_8802_a81b934c2135.slice/crio-a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db WatchSource:0}: Error finding container a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db: Status 404 returned error can't find the container with id a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.884538 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.923158 4792 generic.go:334] "Generic (PLEG): container finished" podID="496fb889-544d-45cf-883e-8523323a8c04" containerID="c3320115f17e921272a568b114f5b2b43c97fa0d9e1f79b3acdb3aa9e10e7dd0" exitCode=0
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.923251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" event={"ID":"496fb889-544d-45cf-883e-8523323a8c04","Type":"ContainerDied","Data":"c3320115f17e921272a568b114f5b2b43c97fa0d9e1f79b3acdb3aa9e10e7dd0"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.923277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" event={"ID":"496fb889-544d-45cf-883e-8523323a8c04","Type":"ContainerStarted","Data":"43b1f23077cee20b73591a01720e9fbb1b279fcec0c8d96e51281003d75f9a99"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.925961 4792 generic.go:334] "Generic (PLEG): container finished" podID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerID="6f50e69d981c64890ffe2307a59b5a9917bec7db8f9b894772a3feff3f57cfc1" exitCode=0
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.926010 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" event={"ID":"2df2814e-70ee-40f3-9efe-4d7cfe16bd38","Type":"ContainerDied","Data":"6f50e69d981c64890ffe2307a59b5a9917bec7db8f9b894772a3feff3f57cfc1"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.928344 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tzwjt"]
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.939428 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rzhpq" event={"ID":"76a50771-3519-451e-af83-32d1da662062","Type":"ContainerStarted","Data":"2fb3f4594fa768b4373e42c352a2b9737c54cfe0abe76468dba2e30dd65a6de9"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.939476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rzhpq" event={"ID":"76a50771-3519-451e-af83-32d1da662062","Type":"ContainerStarted","Data":"d5df2136b718322e017680173c6027e85b1700539d89ecfc54eac73fdcc55758"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.939903 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.945182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6af85927-1a78-41d9-8d3d-cfef6f7f9d20","Type":"ContainerStarted","Data":"7dd341bbd69976dfccd7ef824ec7e9d28fca5e38a9892d515a4bc7446daca758"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.958537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6372-account-create-update-tsd5b" event={"ID":"f67a67b7-bc6b-438b-8802-a81b934c2135","Type":"ContainerStarted","Data":"a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db"}
Feb 16 21:57:03 crc kubenswrapper[4792]: I0216 21:57:03.969002 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rzhpq" podStartSLOduration=4.9689597469999995 podStartE2EDuration="4.968959747s" podCreationTimestamp="2026-02-16 21:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:03.960040707 +0000 UTC m=+1156.613319598" watchObservedRunningTime="2026-02-16 21:57:03.968959747 +0000 UTC m=+1156.622238638"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.026798 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.046697 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1180cfb-8f6c-48eb-baec-1915b5ba377b" path="/var/lib/kubelet/pods/a1180cfb-8f6c-48eb-baec-1915b5ba377b/volumes"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.047414 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99c9de0-8ff3-480c-a57c-85cbc7cfb680" path="/var/lib/kubelet/pods/c99c9de0-8ff3-480c-a57c-85cbc7cfb680/volumes"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.198394 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jl449"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.606115 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qjr26"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.689905 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-r9f8h"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.691522 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.727787 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-r9f8h"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.795872 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-92e9-account-create-update-g97nz"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.850539 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.850633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rm4z\" (UniqueName: \"kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.870110 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d07e-account-create-update-x8jwp"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.875742 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.879129 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.893638 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5331-account-create-update-qsq8t"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.914662 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d07e-account-create-update-x8jwp"]
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.955525 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.955589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rm4z\" (UniqueName: \"kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.957272 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.992247 4792 generic.go:334] "Generic (PLEG): container finished" podID="f67a67b7-bc6b-438b-8802-a81b934c2135" containerID="ee1cd0327853fae844b7c25f8e66e22210410a0970726941b3a0ae69286447a5" exitCode=0
Feb 16 21:57:04 crc kubenswrapper[4792]: I0216 21:57:04.992320 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6372-account-create-update-tsd5b" event={"ID":"f67a67b7-bc6b-438b-8802-a81b934c2135","Type":"ContainerDied","Data":"ee1cd0327853fae844b7c25f8e66e22210410a0970726941b3a0ae69286447a5"}
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.001161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tzwjt" event={"ID":"9607ed45-f58d-4edc-8f15-069b36ce8ce1","Type":"ContainerStarted","Data":"b663d0075946f0a904abe40156d2586468fb1f0a28da1e515cf4fa2d18416f48"}
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.001204 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tzwjt" event={"ID":"9607ed45-f58d-4edc-8f15-069b36ce8ce1","Type":"ContainerStarted","Data":"1be411577e278c954af1b616b809d8d54919fbeda073d71c77b69db2c11769a7"}
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.031549 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jl449" event={"ID":"eee1fc47-fd26-4e80-9640-960ee64b5839","Type":"ContainerStarted","Data":"979b44f482d7936ca6bda3892e5ac9899f119b28aebbd35d17cd06d7604f29bb"}
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.052640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qjr26" event={"ID":"baa06059-0788-46d7-b688-68141d71b288","Type":"ContainerStarted","Data":"2ba99196f7402ca6c6ed6f08fe9a1aa05fc22016cdd9f219c9c76df48c89b67d"}
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.057836 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg6f\" (UniqueName: \"kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.069041 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.058963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rm4z\" (UniqueName: \"kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z\") pod \"mysqld-exporter-openstack-db-create-r9f8h\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") " pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.099526 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-tzwjt" podStartSLOduration=3.099507759 podStartE2EDuration="3.099507759s" podCreationTimestamp="2026-02-16 21:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:05.073632858 +0000 UTC m=+1157.726911769" watchObservedRunningTime="2026-02-16 21:57:05.099507759 +0000 UTC m=+1157.752786650"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.120915 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jl449" podStartSLOduration=2.120891811 podStartE2EDuration="2.120891811s" podCreationTimestamp="2026-02-16 21:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:05.090481978 +0000 UTC m=+1157.743760869" watchObservedRunningTime="2026-02-16 21:57:05.120891811 +0000 UTC m=+1157.774170702"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.171661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgg6f\" (UniqueName: \"kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.171748 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.172499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.191818 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgg6f\" (UniqueName: \"kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f\") pod \"mysqld-exporter-d07e-account-create-update-x8jwp\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") " pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.244997 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:05 crc kubenswrapper[4792]: I0216 21:57:05.250139 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:05 crc kubenswrapper[4792]: W0216 21:57:05.546022 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade7459b_8627_4e5e_a075_e86a88b9eaf0.slice/crio-877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42 WatchSource:0}: Error finding container 877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42: Status 404 returned error can't find the container with id 877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.069135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" event={"ID":"2df2814e-70ee-40f3-9efe-4d7cfe16bd38","Type":"ContainerStarted","Data":"29d0cbe4aa297ca43eb6c9e7c7a2320129194b4520513b5b44bef2167689fabe"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.069559 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.073003 4792 generic.go:334] "Generic (PLEG): container finished" podID="eee1fc47-fd26-4e80-9640-960ee64b5839" containerID="6dc269f7ab8c1f41f052d48b7b6a698a2abf8c93e5b024d26b1c43bdb7daff34" exitCode=0
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.073058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jl449" event={"ID":"eee1fc47-fd26-4e80-9640-960ee64b5839","Type":"ContainerDied","Data":"6dc269f7ab8c1f41f052d48b7b6a698a2abf8c93e5b024d26b1c43bdb7daff34"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.082747 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qjr26" event={"ID":"baa06059-0788-46d7-b688-68141d71b288","Type":"ContainerStarted","Data":"a786f13858cd51ec5e6296f07241ef244e5cc436745f9c0aabe763c8b9e933d7"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.098849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6af85927-1a78-41d9-8d3d-cfef6f7f9d20","Type":"ContainerStarted","Data":"8eeba0a24ff4cc48b9857c0a608efeeb2f9ada3a7effe280baa51412675a1efd"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.098948 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" podStartSLOduration=7.098924056 podStartE2EDuration="7.098924056s" podCreationTimestamp="2026-02-16 21:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:06.085406514 +0000 UTC m=+1158.738685405" watchObservedRunningTime="2026-02-16 21:57:06.098924056 +0000 UTC m=+1158.752202947"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.117171 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qjr26" podStartSLOduration=3.117153503 podStartE2EDuration="3.117153503s" podCreationTimestamp="2026-02-16 21:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:06.102319046 +0000 UTC m=+1158.755597937" watchObservedRunningTime="2026-02-16 21:57:06.117153503 +0000 UTC m=+1158.770432394"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.117550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5331-account-create-update-qsq8t" event={"ID":"49265dfe-072f-483c-a891-510f3b17498c","Type":"ContainerStarted","Data":"510fe6772647da03c5a6805a5d078b9c24b97851379fb669d2c16bdb94cc9938"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.117638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5331-account-create-update-qsq8t" event={"ID":"49265dfe-072f-483c-a891-510f3b17498c","Type":"ContainerStarted","Data":"e851eb3dd80b65d727e4245715cbaf2e58d9233a04b765528480a17a4514710a"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.121095 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" event={"ID":"496fb889-544d-45cf-883e-8523323a8c04","Type":"ContainerStarted","Data":"c792935c695c8381f7355a41fb71d19f8610ba4382ce3a89f2948674bce51b3a"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.121911 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.129568 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-92e9-account-create-update-g97nz" event={"ID":"ade7459b-8627-4e5e-a075-e86a88b9eaf0","Type":"ContainerStarted","Data":"95b8bcd10b57a60cb16862e81771772c1735b8ce8d190126ff999cc7e692ac85"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.129651 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-92e9-account-create-update-g97nz" event={"ID":"ade7459b-8627-4e5e-a075-e86a88b9eaf0","Type":"ContainerStarted","Data":"877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.134540 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d07e-account-create-update-x8jwp"]
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.134938 4792 generic.go:334] "Generic (PLEG): container finished" podID="9607ed45-f58d-4edc-8f15-069b36ce8ce1" containerID="b663d0075946f0a904abe40156d2586468fb1f0a28da1e515cf4fa2d18416f48" exitCode=0
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.135133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tzwjt" event={"ID":"9607ed45-f58d-4edc-8f15-069b36ce8ce1","Type":"ContainerDied","Data":"b663d0075946f0a904abe40156d2586468fb1f0a28da1e515cf4fa2d18416f48"}
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.181167 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" podStartSLOduration=8.181149555 podStartE2EDuration="8.181149555s" podCreationTimestamp="2026-02-16 21:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:06.143926339 +0000 UTC m=+1158.797205230" watchObservedRunningTime="2026-02-16 21:57:06.181149555 +0000 UTC m=+1158.834428446"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.203473 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5331-account-create-update-qsq8t" podStartSLOduration=3.20344845 podStartE2EDuration="3.20344845s" podCreationTimestamp="2026-02-16 21:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:06.172058661 +0000 UTC m=+1158.825337562" watchObservedRunningTime="2026-02-16 21:57:06.20344845 +0000 UTC m=+1158.856727341"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.218531 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-92e9-account-create-update-g97nz" podStartSLOduration=3.218510963 podStartE2EDuration="3.218510963s" podCreationTimestamp="2026-02-16 21:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:06.189048166 +0000 UTC m=+1158.842327057" watchObservedRunningTime="2026-02-16 21:57:06.218510963 +0000 UTC m=+1158.871789854"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.249650 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-r9f8h"]
Feb 16 21:57:06 crc kubenswrapper[4792]: W0216 21:57:06.264054 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29d90353_5fb7_4eca_878f_fe0ce1e0a5a4.slice/crio-f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb WatchSource:0}: Error finding container f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb: Status 404 returned error can't find the container with id f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.537385 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6372-account-create-update-tsd5b"
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.629457 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts\") pod \"f67a67b7-bc6b-438b-8802-a81b934c2135\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") "
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.629647 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx6lw\" (UniqueName: \"kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw\") pod \"f67a67b7-bc6b-438b-8802-a81b934c2135\" (UID: \"f67a67b7-bc6b-438b-8802-a81b934c2135\") "
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.629960 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f67a67b7-bc6b-438b-8802-a81b934c2135" (UID: "f67a67b7-bc6b-438b-8802-a81b934c2135"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.630288 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f67a67b7-bc6b-438b-8802-a81b934c2135-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.635286 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw" (OuterVolumeSpecName: "kube-api-access-zx6lw") pod "f67a67b7-bc6b-438b-8802-a81b934c2135" (UID: "f67a67b7-bc6b-438b-8802-a81b934c2135"). InnerVolumeSpecName "kube-api-access-zx6lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:06 crc kubenswrapper[4792]: I0216 21:57:06.732295 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx6lw\" (UniqueName: \"kubernetes.io/projected/f67a67b7-bc6b-438b-8802-a81b934c2135-kube-api-access-zx6lw\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:06 crc kubenswrapper[4792]: E0216 21:57:06.834486 4792 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.200:52210->38.102.83.200:37131: read tcp 38.102.83.200:52210->38.102.83.200:37131: read: connection reset by peer
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.152397 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6af85927-1a78-41d9-8d3d-cfef6f7f9d20","Type":"ContainerStarted","Data":"c810ec34056872eaf3d29aab04638a2f70415dd6c4558ee07bd25ae34a4fbc93"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.152918 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.155509 4792 generic.go:334] "Generic (PLEG): container finished" podID="49265dfe-072f-483c-a891-510f3b17498c" containerID="510fe6772647da03c5a6805a5d078b9c24b97851379fb669d2c16bdb94cc9938" exitCode=0
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.155686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5331-account-create-update-qsq8t" event={"ID":"49265dfe-072f-483c-a891-510f3b17498c","Type":"ContainerDied","Data":"510fe6772647da03c5a6805a5d078b9c24b97851379fb669d2c16bdb94cc9938"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.157444 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" containerID="bfd28e1726d7ef61ed7cd5f2f6e68412f52a638a0116f8943801f6cdefaa71d8" exitCode=0
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.157496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp" event={"ID":"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e","Type":"ContainerDied","Data":"bfd28e1726d7ef61ed7cd5f2f6e68412f52a638a0116f8943801f6cdefaa71d8"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.157515 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp" event={"ID":"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e","Type":"ContainerStarted","Data":"1fdb8323f549ec18841d5ce273d9c37af347d9e73f99810900b1f0ddf3632325"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.159082 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6372-account-create-update-tsd5b" event={"ID":"f67a67b7-bc6b-438b-8802-a81b934c2135","Type":"ContainerDied","Data":"a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.159103 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9553ea028a1774fa5a00ce583e44718127550e561ba93e3ecd4214a2a5bc1db"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.159153 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6372-account-create-update-tsd5b"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.167333 4792 generic.go:334] "Generic (PLEG): container finished" podID="ade7459b-8627-4e5e-a075-e86a88b9eaf0" containerID="95b8bcd10b57a60cb16862e81771772c1735b8ce8d190126ff999cc7e692ac85" exitCode=0
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.167485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-92e9-account-create-update-g97nz" event={"ID":"ade7459b-8627-4e5e-a075-e86a88b9eaf0","Type":"ContainerDied","Data":"95b8bcd10b57a60cb16862e81771772c1735b8ce8d190126ff999cc7e692ac85"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.168716 4792 generic.go:334] "Generic (PLEG): container finished" podID="29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" containerID="85a4c3d670fbf7833eb37099b9949151e0492b4cee3e4ab8b6c083612cae3570" exitCode=0
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.169017 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h" event={"ID":"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4","Type":"ContainerDied","Data":"85a4c3d670fbf7833eb37099b9949151e0492b4cee3e4ab8b6c083612cae3570"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.169167 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h" event={"ID":"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4","Type":"ContainerStarted","Data":"f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.178271 4792 generic.go:334] "Generic (PLEG): container finished" podID="baa06059-0788-46d7-b688-68141d71b288" containerID="a786f13858cd51ec5e6296f07241ef244e5cc436745f9c0aabe763c8b9e933d7" exitCode=0
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.178585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qjr26" event={"ID":"baa06059-0788-46d7-b688-68141d71b288","Type":"ContainerDied","Data":"a786f13858cd51ec5e6296f07241ef244e5cc436745f9c0aabe763c8b9e933d7"}
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.187029 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.352471882 podStartE2EDuration="8.187012133s" podCreationTimestamp="2026-02-16 21:56:59 +0000 UTC" firstStartedPulling="2026-02-16 21:57:02.803023897 +0000 UTC m=+1155.456302788" lastFinishedPulling="2026-02-16 21:57:05.637564148 +0000 UTC m=+1158.290843039" observedRunningTime="2026-02-16 21:57:07.177051437 +0000 UTC m=+1159.830330348" watchObservedRunningTime="2026-02-16 21:57:07.187012133 +0000 UTC m=+1159.840291024"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.624758 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jl449"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.665799 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts\") pod \"eee1fc47-fd26-4e80-9640-960ee64b5839\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") "
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.665845 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tngnk\" (UniqueName: \"kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk\") pod \"eee1fc47-fd26-4e80-9640-960ee64b5839\" (UID: \"eee1fc47-fd26-4e80-9640-960ee64b5839\") "
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.667377 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eee1fc47-fd26-4e80-9640-960ee64b5839" (UID: "eee1fc47-fd26-4e80-9640-960ee64b5839"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.672451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk" (OuterVolumeSpecName: "kube-api-access-tngnk") pod "eee1fc47-fd26-4e80-9640-960ee64b5839" (UID: "eee1fc47-fd26-4e80-9640-960ee64b5839"). InnerVolumeSpecName "kube-api-access-tngnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.767962 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee1fc47-fd26-4e80-9640-960ee64b5839-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.767995 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tngnk\" (UniqueName: \"kubernetes.io/projected/eee1fc47-fd26-4e80-9640-960ee64b5839-kube-api-access-tngnk\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.768178 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tzwjt"
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.869482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b292m\" (UniqueName: \"kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m\") pod \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") "
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.869749 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts\") pod \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\" (UID: \"9607ed45-f58d-4edc-8f15-069b36ce8ce1\") "
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.870222 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9607ed45-f58d-4edc-8f15-069b36ce8ce1" (UID: "9607ed45-f58d-4edc-8f15-069b36ce8ce1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.870953 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9607ed45-f58d-4edc-8f15-069b36ce8ce1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.872282 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m" (OuterVolumeSpecName: "kube-api-access-b292m") pod "9607ed45-f58d-4edc-8f15-069b36ce8ce1" (UID: "9607ed45-f58d-4edc-8f15-069b36ce8ce1"). InnerVolumeSpecName "kube-api-access-b292m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:07 crc kubenswrapper[4792]: I0216 21:57:07.974230 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b292m\" (UniqueName: \"kubernetes.io/projected/9607ed45-f58d-4edc-8f15-069b36ce8ce1-kube-api-access-b292m\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.189182 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tzwjt"
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.189197 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tzwjt" event={"ID":"9607ed45-f58d-4edc-8f15-069b36ce8ce1","Type":"ContainerDied","Data":"1be411577e278c954af1b616b809d8d54919fbeda073d71c77b69db2c11769a7"}
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.189243 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be411577e278c954af1b616b809d8d54919fbeda073d71c77b69db2c11769a7"
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.191064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jl449" event={"ID":"eee1fc47-fd26-4e80-9640-960ee64b5839","Type":"ContainerDied","Data":"979b44f482d7936ca6bda3892e5ac9899f119b28aebbd35d17cd06d7604f29bb"}
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.191123 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979b44f482d7936ca6bda3892e5ac9899f119b28aebbd35d17cd06d7604f29bb"
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.191321 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jl449"
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.685238 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.793125 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p282s\" (UniqueName: \"kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s\") pod \"baa06059-0788-46d7-b688-68141d71b288\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") "
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.793668 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts\") pod \"baa06059-0788-46d7-b688-68141d71b288\" (UID: \"baa06059-0788-46d7-b688-68141d71b288\") "
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.795267 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baa06059-0788-46d7-b688-68141d71b288" (UID: "baa06059-0788-46d7-b688-68141d71b288"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.799905 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s" (OuterVolumeSpecName: "kube-api-access-p282s") pod "baa06059-0788-46d7-b688-68141d71b288" (UID: "baa06059-0788-46d7-b688-68141d71b288"). InnerVolumeSpecName "kube-api-access-p282s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.808361 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p282s\" (UniqueName: \"kubernetes.io/projected/baa06059-0788-46d7-b688-68141d71b288-kube-api-access-p282s\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:08 crc kubenswrapper[4792]: I0216 21:57:08.808549 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa06059-0788-46d7-b688-68141d71b288-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.067610 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.074511 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.083668 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.099329 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.144938 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksp9g\" (UniqueName: \"kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g\") pod \"49265dfe-072f-483c-a891-510f3b17498c\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.145201 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts\") pod \"49265dfe-072f-483c-a891-510f3b17498c\" (UID: \"49265dfe-072f-483c-a891-510f3b17498c\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.146275 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49265dfe-072f-483c-a891-510f3b17498c" (UID: "49265dfe-072f-483c-a891-510f3b17498c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.150404 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g" (OuterVolumeSpecName: "kube-api-access-ksp9g") pod "49265dfe-072f-483c-a891-510f3b17498c" (UID: "49265dfe-072f-483c-a891-510f3b17498c"). InnerVolumeSpecName "kube-api-access-ksp9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.208103 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qjr26" event={"ID":"baa06059-0788-46d7-b688-68141d71b288","Type":"ContainerDied","Data":"2ba99196f7402ca6c6ed6f08fe9a1aa05fc22016cdd9f219c9c76df48c89b67d"}
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.208163 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba99196f7402ca6c6ed6f08fe9a1aa05fc22016cdd9f219c9c76df48c89b67d"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.208252 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qjr26"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.215919 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5331-account-create-update-qsq8t" event={"ID":"49265dfe-072f-483c-a891-510f3b17498c","Type":"ContainerDied","Data":"e851eb3dd80b65d727e4245715cbaf2e58d9233a04b765528480a17a4514710a"}
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.215964 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e851eb3dd80b65d727e4245715cbaf2e58d9233a04b765528480a17a4514710a"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.216034 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5331-account-create-update-qsq8t"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.232618 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.232611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d07e-account-create-update-x8jwp" event={"ID":"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e","Type":"ContainerDied","Data":"1fdb8323f549ec18841d5ce273d9c37af347d9e73f99810900b1f0ddf3632325"}
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.232742 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fdb8323f549ec18841d5ce273d9c37af347d9e73f99810900b1f0ddf3632325"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.237449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-92e9-account-create-update-g97nz" event={"ID":"ade7459b-8627-4e5e-a075-e86a88b9eaf0","Type":"ContainerDied","Data":"877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42"}
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.237485 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877e60e793f02c4d5afd2b72c737551758c5454808de2d38f459ae1a909eda42"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.237583 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-92e9-account-create-update-g97nz"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.244437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h" event={"ID":"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4","Type":"ContainerDied","Data":"f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb"}
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.244475 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f29d2b812c70501f20022bcb1c4662deb3f6acbb98a6a2450048a8544ebe4fdb"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.244547 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-r9f8h"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247337 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts\") pod \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247403 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rm4z\" (UniqueName: \"kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z\") pod \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247428 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts\") pod \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\" (UID: \"29d90353-5fb7-4eca-878f-fe0ce1e0a5a4\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgkqp\" (UniqueName: \"kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp\") pod \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247477 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgg6f\" (UniqueName: \"kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f\") pod \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\" (UID: \"2ede8625-b8a4-4d49-abc2-9c4fb8edab4e\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.247551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts\") pod \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\" (UID: \"ade7459b-8627-4e5e-a075-e86a88b9eaf0\") "
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.248147 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" (UID: "2ede8625-b8a4-4d49-abc2-9c4fb8edab4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.248825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ade7459b-8627-4e5e-a075-e86a88b9eaf0" (UID: "ade7459b-8627-4e5e-a075-e86a88b9eaf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.249244 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49265dfe-072f-483c-a891-510f3b17498c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.249279 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.249295 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksp9g\" (UniqueName: \"kubernetes.io/projected/49265dfe-072f-483c-a891-510f3b17498c-kube-api-access-ksp9g\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.249308 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade7459b-8627-4e5e-a075-e86a88b9eaf0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.251309 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp" (OuterVolumeSpecName: "kube-api-access-lgkqp") pod "ade7459b-8627-4e5e-a075-e86a88b9eaf0" (UID: "ade7459b-8627-4e5e-a075-e86a88b9eaf0"). InnerVolumeSpecName "kube-api-access-lgkqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.251900 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" (UID: "29d90353-5fb7-4eca-878f-fe0ce1e0a5a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.253525 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f" (OuterVolumeSpecName: "kube-api-access-fgg6f") pod "2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" (UID: "2ede8625-b8a4-4d49-abc2-9c4fb8edab4e"). InnerVolumeSpecName "kube-api-access-fgg6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.253937 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z" (OuterVolumeSpecName: "kube-api-access-4rm4z") pod "29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" (UID: "29d90353-5fb7-4eca-878f-fe0ce1e0a5a4"). InnerVolumeSpecName "kube-api-access-4rm4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.352536 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rm4z\" (UniqueName: \"kubernetes.io/projected/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-kube-api-access-4rm4z\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.352578 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.352588 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgkqp\" (UniqueName: \"kubernetes.io/projected/ade7459b-8627-4e5e-a075-e86a88b9eaf0-kube-api-access-lgkqp\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.352636 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgg6f\" (UniqueName: \"kubernetes.io/projected/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e-kube-api-access-fgg6f\") on node \"crc\" DevicePath \"\""
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.397564 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.463586 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qfl5b"]
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464025 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade7459b-8627-4e5e-a075-e86a88b9eaf0" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464041 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade7459b-8627-4e5e-a075-e86a88b9eaf0" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464056 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464069 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464080 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67a67b7-bc6b-438b-8802-a81b934c2135" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464086 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67a67b7-bc6b-438b-8802-a81b934c2135" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464093 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464098 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464107 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49265dfe-072f-483c-a891-510f3b17498c" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464112 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="49265dfe-072f-483c-a891-510f3b17498c" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464126 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa06059-0788-46d7-b688-68141d71b288" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464131 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa06059-0788-46d7-b688-68141d71b288" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464149 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9607ed45-f58d-4edc-8f15-069b36ce8ce1" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464155 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9607ed45-f58d-4edc-8f15-069b36ce8ce1" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: E0216 21:57:09.464170 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee1fc47-fd26-4e80-9640-960ee64b5839" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464176 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee1fc47-fd26-4e80-9640-960ee64b5839" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464339 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade7459b-8627-4e5e-a075-e86a88b9eaf0" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464347 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464360 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa06059-0788-46d7-b688-68141d71b288" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464376 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464386 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67a67b7-bc6b-438b-8802-a81b934c2135" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464396 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9607ed45-f58d-4edc-8f15-069b36ce8ce1" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464407 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="49265dfe-072f-483c-a891-510f3b17498c" containerName="mariadb-account-create-update"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.464419 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee1fc47-fd26-4e80-9640-960ee64b5839" containerName="mariadb-database-create"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.465077 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.470589 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.473496 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qfl5b"]
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.556633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvtq\" (UniqueName: \"kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.558627 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.661139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.661254 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvtq\" (UniqueName: \"kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.662105 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.678545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvtq\" (UniqueName: \"kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq\") pod \"root-account-create-update-qfl5b\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:09 crc kubenswrapper[4792]: I0216 21:57:09.787517 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qfl5b"
Feb 16 21:57:11 crc kubenswrapper[4792]: I0216 21:57:11.284049 4792 generic.go:334] "Generic (PLEG): container finished" podID="bebd5c80-d002-49e6-ac52-d1d323b83801" containerID="24a8af67cd13e5538efc6f90d1698af657d195c007bb74ec286fc3106eb4d661" exitCode=0
Feb 16 21:57:11 crc kubenswrapper[4792]: I0216 21:57:11.284470 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qlqfk" event={"ID":"bebd5c80-d002-49e6-ac52-d1d323b83801","Type":"ContainerDied","Data":"24a8af67cd13e5538efc6f90d1698af657d195c007bb74ec286fc3106eb4d661"}
Feb 16 21:57:11 crc kubenswrapper[4792]: I0216 21:57:11.716864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:57:11 crc kubenswrapper[4792]: I0216 21:57:11.725052 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e2ada762-95ad-4810-b5da-b4ca59652a45-etc-swift\") pod \"swift-storage-0\" (UID: \"e2ada762-95ad-4810-b5da-b4ca59652a45\") " pod="openstack/swift-storage-0"
Feb 16 21:57:11 crc kubenswrapper[4792]: I0216 21:57:11.731745 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.197619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.198066 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.289306 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.385679 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.759973 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8982l"]
Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.761770 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.763810 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bcfqq" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.766132 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.770061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8982l"] Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.845033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.845120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqk6b\" (UniqueName: \"kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.845261 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.845339 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.947208 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.947255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqk6b\" (UniqueName: \"kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.947305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.947334 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle\") pod 
\"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.952747 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.953538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.954416 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:12 crc kubenswrapper[4792]: I0216 21:57:12.968665 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqk6b\" (UniqueName: \"kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b\") pod \"glance-db-sync-8982l\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " pod="openstack/glance-db-sync-8982l" Feb 16 21:57:13 crc kubenswrapper[4792]: I0216 21:57:13.085588 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8982l" Feb 16 21:57:14 crc kubenswrapper[4792]: I0216 21:57:14.819273 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:57:14 crc kubenswrapper[4792]: I0216 21:57:14.923279 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:57:14 crc kubenswrapper[4792]: I0216 21:57:14.925039 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="dnsmasq-dns" containerID="cri-o://c792935c695c8381f7355a41fb71d19f8610ba4382ce3a89f2948674bce51b3a" gracePeriod=10 Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.072856 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-scwfx"] Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.074800 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.081256 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-scwfx"] Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.192765 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.192970 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4m5\" (UniqueName: \"kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.278526 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b08c-account-create-update-2gzj8"] Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.279825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.281716 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.294734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.294974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4m5\" (UniqueName: \"kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.295866 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b08c-account-create-update-2gzj8"] Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.295918 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.320999 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4m5\" (UniqueName: \"kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5\") pod \"mysqld-exporter-openstack-cell1-db-create-scwfx\" 
(UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.396690 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.397239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.397530 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvb24\" (UniqueName: \"kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.500278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.500365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvb24\" (UniqueName: \"kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.501355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.520487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvb24\" (UniqueName: \"kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24\") pod \"mysqld-exporter-b08c-account-create-update-2gzj8\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:15 crc kubenswrapper[4792]: I0216 21:57:15.607998 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.343728 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qlqfk" event={"ID":"bebd5c80-d002-49e6-ac52-d1d323b83801","Type":"ContainerDied","Data":"e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed"} Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.344036 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e96d8aa878080b6ac55d8e85ed5681c404c980a1950e29435698bc07c42aed" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.345981 4792 generic.go:334] "Generic (PLEG): container finished" podID="496fb889-544d-45cf-883e-8523323a8c04" containerID="c792935c695c8381f7355a41fb71d19f8610ba4382ce3a89f2948674bce51b3a" exitCode=0 Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.346008 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" event={"ID":"496fb889-544d-45cf-883e-8523323a8c04","Type":"ContainerDied","Data":"c792935c695c8381f7355a41fb71d19f8610ba4382ce3a89f2948674bce51b3a"} Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.424758 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qlqfk" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521387 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521443 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521677 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521723 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.521840 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9pc\" (UniqueName: \"kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 
21:57:16.521890 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts\") pod \"bebd5c80-d002-49e6-ac52-d1d323b83801\" (UID: \"bebd5c80-d002-49e6-ac52-d1d323b83801\") " Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.524151 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.525382 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.528127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc" (OuterVolumeSpecName: "kube-api-access-zf9pc") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "kube-api-access-zf9pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.554789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.556999 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.572958 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts" (OuterVolumeSpecName: "scripts") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.586017 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bebd5c80-d002-49e6-ac52-d1d323b83801" (UID: "bebd5c80-d002-49e6-ac52-d1d323b83801"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625062 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9pc\" (UniqueName: \"kubernetes.io/projected/bebd5c80-d002-49e6-ac52-d1d323b83801-kube-api-access-zf9pc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625096 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625108 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625116 4792 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625126 4792 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bebd5c80-d002-49e6-ac52-d1d323b83801-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625134 4792 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bebd5c80-d002-49e6-ac52-d1d323b83801-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:16 crc kubenswrapper[4792]: I0216 21:57:16.625144 4792 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bebd5c80-d002-49e6-ac52-d1d323b83801-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.186450 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qfl5b"] Feb 16 21:57:17 crc kubenswrapper[4792]: W0216 21:57:17.209852 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod521cf6b2_e2cf_4ae6_a34c_71e15d93916f.slice/crio-50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617 WatchSource:0}: Error finding container 50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617: Status 404 returned error can't find the container with id 50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617 Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.210948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-scwfx"] Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.220254 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b08c-account-create-update-2gzj8"] Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.314363 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:57:17 crc kubenswrapper[4792]: W0216 21:57:17.345212 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2ada762_95ad_4810_b5da_b4ca59652a45.slice/crio-3f9464163d21e04d2750fdad512ecb51f807c44ae7ef3acaa317b42dd48ed853 WatchSource:0}: Error finding container 
3f9464163d21e04d2750fdad512ecb51f807c44ae7ef3acaa317b42dd48ed853: Status 404 returned error can't find the container with id 3f9464163d21e04d2750fdad512ecb51f807c44ae7ef3acaa317b42dd48ed853 Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.347648 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.358154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerStarted","Data":"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.365531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" event={"ID":"496fb889-544d-45cf-883e-8523323a8c04","Type":"ContainerDied","Data":"43b1f23077cee20b73591a01720e9fbb1b279fcec0c8d96e51281003d75f9a99"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.365610 4792 scope.go:117] "RemoveContainer" containerID="c792935c695c8381f7355a41fb71d19f8610ba4382ce3a89f2948674bce51b3a" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.365765 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5wwfc" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.382679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" event={"ID":"521cf6b2-e2cf-4ae6-a34c-71e15d93916f","Type":"ContainerStarted","Data":"50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.385631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfl5b" event={"ID":"04a09915-e343-418c-abc3-790831a7e28f","Type":"ContainerStarted","Data":"261c215bc32054401e0801a05536f81a7507558b94c9eb3bf9b0d28b70bc5719"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.386746 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" event={"ID":"b49abd6e-b475-4ad2-a88a-c0dc37ab2997","Type":"ContainerStarted","Data":"3ea3749c6f79fdfac06a688bb5aa5d812a0026642b011d5dc4f0bbe809f9dfeb"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.387742 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qlqfk" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.388276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"3f9464163d21e04d2750fdad512ecb51f807c44ae7ef3acaa317b42dd48ed853"} Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.440108 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8982l"] Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.447081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc\") pod \"496fb889-544d-45cf-883e-8523323a8c04\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.447227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config\") pod \"496fb889-544d-45cf-883e-8523323a8c04\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.447384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb\") pod \"496fb889-544d-45cf-883e-8523323a8c04\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.447422 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64bsj\" (UniqueName: \"kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj\") pod \"496fb889-544d-45cf-883e-8523323a8c04\" (UID: \"496fb889-544d-45cf-883e-8523323a8c04\") " Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.529355 4792 scope.go:117] "RemoveContainer" containerID="c3320115f17e921272a568b114f5b2b43c97fa0d9e1f79b3acdb3aa9e10e7dd0" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.618148 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj" (OuterVolumeSpecName: "kube-api-access-64bsj") pod "496fb889-544d-45cf-883e-8523323a8c04" (UID: "496fb889-544d-45cf-883e-8523323a8c04"). InnerVolumeSpecName "kube-api-access-64bsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.654148 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64bsj\" (UniqueName: \"kubernetes.io/projected/496fb889-544d-45cf-883e-8523323a8c04-kube-api-access-64bsj\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.705299 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config" (OuterVolumeSpecName: "config") pod "496fb889-544d-45cf-883e-8523323a8c04" (UID: "496fb889-544d-45cf-883e-8523323a8c04"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.707982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "496fb889-544d-45cf-883e-8523323a8c04" (UID: "496fb889-544d-45cf-883e-8523323a8c04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.710656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "496fb889-544d-45cf-883e-8523323a8c04" (UID: "496fb889-544d-45cf-883e-8523323a8c04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.758040 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.758332 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:17 crc kubenswrapper[4792]: I0216 21:57:17.758346 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496fb889-544d-45cf-883e-8523323a8c04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.165689 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.179274 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5wwfc"] Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.399812 4792 generic.go:334] "Generic (PLEG): container finished" podID="04a09915-e343-418c-abc3-790831a7e28f" containerID="beaf8deeb319f83cd4751297f248c7459f696f1d31f766f3c293aa3fe9ee354b" exitCode=0 Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.399901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfl5b" event={"ID":"04a09915-e343-418c-abc3-790831a7e28f","Type":"ContainerDied","Data":"beaf8deeb319f83cd4751297f248c7459f696f1d31f766f3c293aa3fe9ee354b"} Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.401837 4792 generic.go:334] "Generic (PLEG): container finished" podID="b49abd6e-b475-4ad2-a88a-c0dc37ab2997" containerID="7ae24978e92b225bb03d73c08958d4f53f8f9beb9dfd2bd4874c8732387b2260" exitCode=0 Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.401918 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" event={"ID":"b49abd6e-b475-4ad2-a88a-c0dc37ab2997","Type":"ContainerDied","Data":"7ae24978e92b225bb03d73c08958d4f53f8f9beb9dfd2bd4874c8732387b2260"} Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.404427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8982l" event={"ID":"63303797-e14d-4091-ab14-8be69dd506ad","Type":"ContainerStarted","Data":"c34c0e230a2f539fffe9e8408904859de9b0b95882e429860e78bc9fc8f09898"} Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.405902 4792 
generic.go:334] "Generic (PLEG): container finished" podID="521cf6b2-e2cf-4ae6-a34c-71e15d93916f" containerID="68bdb856e434dfda4401fbdb963fdf8c70d9f0e5b81d9115ef0b2c13a64432d6" exitCode=0 Feb 16 21:57:18 crc kubenswrapper[4792]: I0216 21:57:18.405930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" event={"ID":"521cf6b2-e2cf-4ae6-a34c-71e15d93916f","Type":"ContainerDied","Data":"68bdb856e434dfda4401fbdb963fdf8c70d9f0e5b81d9115ef0b2c13a64432d6"} Feb 16 21:57:19 crc kubenswrapper[4792]: I0216 21:57:19.417375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"b4f4076f12c8f9ddf518e398bf8e8d78bb82629524766e42813eaad5ac66223b"} Feb 16 21:57:19 crc kubenswrapper[4792]: I0216 21:57:19.417956 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"733b2bd7efe4878f0cf1317ea1f1d363964a1c13e05214978b172978a27f1484"} Feb 16 21:57:19 crc kubenswrapper[4792]: I0216 21:57:19.939078 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:19 crc kubenswrapper[4792]: I0216 21:57:19.965534 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qfl5b" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.009260 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.037479 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.070502 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496fb889-544d-45cf-883e-8523323a8c04" path="/var/lib/kubelet/pods/496fb889-544d-45cf-883e-8523323a8c04/volumes" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.125281 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpvtq\" (UniqueName: \"kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq\") pod \"04a09915-e343-418c-abc3-790831a7e28f\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.125395 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4m5\" (UniqueName: \"kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5\") pod \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.125439 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts\") pod \"04a09915-e343-418c-abc3-790831a7e28f\" (UID: \"04a09915-e343-418c-abc3-790831a7e28f\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.125529 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts\") pod \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\" (UID: \"b49abd6e-b475-4ad2-a88a-c0dc37ab2997\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.126662 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04a09915-e343-418c-abc3-790831a7e28f" (UID: "04a09915-e343-418c-abc3-790831a7e28f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.126878 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b49abd6e-b475-4ad2-a88a-c0dc37ab2997" (UID: "b49abd6e-b475-4ad2-a88a-c0dc37ab2997"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.131026 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq" (OuterVolumeSpecName: "kube-api-access-jpvtq") pod "04a09915-e343-418c-abc3-790831a7e28f" (UID: "04a09915-e343-418c-abc3-790831a7e28f"). InnerVolumeSpecName "kube-api-access-jpvtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.131232 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5" (OuterVolumeSpecName: "kube-api-access-tx4m5") pod "b49abd6e-b475-4ad2-a88a-c0dc37ab2997" (UID: "b49abd6e-b475-4ad2-a88a-c0dc37ab2997"). 
InnerVolumeSpecName "kube-api-access-tx4m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.230378 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts\") pod \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.230548 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvb24\" (UniqueName: \"kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24\") pod \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\" (UID: \"521cf6b2-e2cf-4ae6-a34c-71e15d93916f\") " Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.230986 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpvtq\" (UniqueName: \"kubernetes.io/projected/04a09915-e343-418c-abc3-790831a7e28f-kube-api-access-jpvtq\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.231004 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4m5\" (UniqueName: \"kubernetes.io/projected/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-kube-api-access-tx4m5\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.231014 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04a09915-e343-418c-abc3-790831a7e28f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.231023 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49abd6e-b475-4ad2-a88a-c0dc37ab2997-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.231091 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "521cf6b2-e2cf-4ae6-a34c-71e15d93916f" (UID: "521cf6b2-e2cf-4ae6-a34c-71e15d93916f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.237860 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24" (OuterVolumeSpecName: "kube-api-access-tvb24") pod "521cf6b2-e2cf-4ae6-a34c-71e15d93916f" (UID: "521cf6b2-e2cf-4ae6-a34c-71e15d93916f"). InnerVolumeSpecName "kube-api-access-tvb24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.337063 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.337106 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvb24\" (UniqueName: \"kubernetes.io/projected/521cf6b2-e2cf-4ae6-a34c-71e15d93916f-kube-api-access-tvb24\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.429455 4792 generic.go:334] "Generic (PLEG): container finished" podID="a04fbeec-860c-4b22-b88d-087872b64e62" containerID="dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2" exitCode=0 Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.429507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerDied","Data":"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.432181 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" event={"ID":"521cf6b2-e2cf-4ae6-a34c-71e15d93916f","Type":"ContainerDied","Data":"50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.432218 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fbcc2eb94d19b018e5df063a9799c6b7804384a776a80048610d366f581617" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.432220 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b08c-account-create-update-2gzj8" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.436309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfl5b" event={"ID":"04a09915-e343-418c-abc3-790831a7e28f","Type":"ContainerDied","Data":"261c215bc32054401e0801a05536f81a7507558b94c9eb3bf9b0d28b70bc5719"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.436382 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261c215bc32054401e0801a05536f81a7507558b94c9eb3bf9b0d28b70bc5719" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.436331 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qfl5b" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.439136 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.439140 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-scwfx" event={"ID":"b49abd6e-b475-4ad2-a88a-c0dc37ab2997","Type":"ContainerDied","Data":"3ea3749c6f79fdfac06a688bb5aa5d812a0026642b011d5dc4f0bbe809f9dfeb"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.439175 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea3749c6f79fdfac06a688bb5aa5d812a0026642b011d5dc4f0bbe809f9dfeb" Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.459451 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"3b8ac4c1f4e95920e629e2e3aff2aa9988a5b56e3350a01cee39ce911bccff72"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.459496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"c1f019ca1dc675063ef0d3b4dd2e0bdca49fe4379917ba0fe7f84e85f9759c12"} Feb 16 21:57:20 crc kubenswrapper[4792]: I0216 21:57:20.463657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerStarted","Data":"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff"} Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.474717 4792 generic.go:334] "Generic (PLEG): container finished" podID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerID="5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a" exitCode=0 Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.474886 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerDied","Data":"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a"} Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.478679 4792 generic.go:334] "Generic (PLEG): container finished" podID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerID="1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228" exitCode=0 Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.478736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerDied","Data":"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"} Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.481672 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerStarted","Data":"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08"} Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.481900 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.551288 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.936109289 podStartE2EDuration="1m4.551267616s" podCreationTimestamp="2026-02-16 21:56:17 +0000 UTC" firstStartedPulling="2026-02-16 21:56:20.763857777 +0000 UTC m=+1113.417136668" lastFinishedPulling="2026-02-16 
21:56:45.379016114 +0000 UTC m=+1138.032294995" observedRunningTime="2026-02-16 21:57:21.539094468 +0000 UTC m=+1174.192373359" watchObservedRunningTime="2026-02-16 21:57:21.551267616 +0000 UTC m=+1174.204546507"
Feb 16 21:57:21 crc kubenswrapper[4792]: I0216 21:57:21.907400 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b4c75486b-tlvk9" podUID="07c162cb-aadc-4abf-a1f6-3875f813417d" containerName="console" containerID="cri-o://39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0" gracePeriod=15
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.428476 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b4c75486b-tlvk9_07c162cb-aadc-4abf-a1f6-3875f813417d/console/0.log"
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.429273 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b4c75486b-tlvk9"
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496440 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82vz\" (UniqueName: \"kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496717 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496762 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496831 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.496917 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert\") pod \"07c162cb-aadc-4abf-a1f6-3875f813417d\" (UID: \"07c162cb-aadc-4abf-a1f6-3875f813417d\") "
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.500817 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca" (OuterVolumeSpecName: "service-ca") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.501350 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config" (OuterVolumeSpecName: "console-config") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.501541 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.501721 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.506688 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz" (OuterVolumeSpecName: "kube-api-access-l82vz") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "kube-api-access-l82vz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.507255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.510535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"a87196748128dc43fc3dcc1e73fcf21228b015f7b8ba12a90e16ea14d207b601"}
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.510589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"012b988933aaf10cc8ae4a7ae05adb41e998c9bd52a40be622fc1a091994b3be"}
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.510614 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"cfd08736c31856b7737f26dd8377d3e736792f67187c37fe5d4a6e4d6cd77bbb"}
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.513193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerStarted","Data":"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199"}
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.513925 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.517978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "07c162cb-aadc-4abf-a1f6-3875f813417d" (UID: "07c162cb-aadc-4abf-a1f6-3875f813417d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.525262 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerStarted","Data":"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"}
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.526176 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.543159 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b4c75486b-tlvk9_07c162cb-aadc-4abf-a1f6-3875f813417d/console/0.log"
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.543449 4792 generic.go:334] "Generic (PLEG): container finished" podID="07c162cb-aadc-4abf-a1f6-3875f813417d" containerID="39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0" exitCode=2
Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.544646 4792 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-6b4c75486b-tlvk9" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.553179 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.998168889 podStartE2EDuration="1m5.553156807s" podCreationTimestamp="2026-02-16 21:56:17 +0000 UTC" firstStartedPulling="2026-02-16 21:56:20.824011395 +0000 UTC m=+1113.477290286" lastFinishedPulling="2026-02-16 21:56:45.378999313 +0000 UTC m=+1138.032278204" observedRunningTime="2026-02-16 21:57:22.5461974 +0000 UTC m=+1175.199476301" watchObservedRunningTime="2026-02-16 21:57:22.553156807 +0000 UTC m=+1175.206435698" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.546736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b4c75486b-tlvk9" event={"ID":"07c162cb-aadc-4abf-a1f6-3875f813417d","Type":"ContainerDied","Data":"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0"} Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.567752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b4c75486b-tlvk9" event={"ID":"07c162cb-aadc-4abf-a1f6-3875f813417d","Type":"ContainerDied","Data":"e84ecf9032fb57d2f417a8bee87747a0ec66d1a50be3b5ece2dc58f44d664602"} Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.567849 4792 scope.go:117] "RemoveContainer" containerID="39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.595834 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.988999911 podStartE2EDuration="1m4.595816235s" podCreationTimestamp="2026-02-16 21:56:18 +0000 UTC" firstStartedPulling="2026-02-16 21:56:20.773205367 +0000 UTC m=+1113.426484248" lastFinishedPulling="2026-02-16 21:56:45.380021681 +0000 UTC m=+1138.033300572" observedRunningTime="2026-02-16 21:57:22.587125561 +0000 UTC m=+1175.240404452" watchObservedRunningTime="2026-02-16 21:57:22.595816235 +0000 UTC m=+1175.249095126" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599778 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599804 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599814 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82vz\" (UniqueName: \"kubernetes.io/projected/07c162cb-aadc-4abf-a1f6-3875f813417d-kube-api-access-l82vz\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599824 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599833 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599841 4792 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07c162cb-aadc-4abf-a1f6-3875f813417d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.599849 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07c162cb-aadc-4abf-a1f6-3875f813417d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.645642 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"] Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.654365 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b4c75486b-tlvk9"] Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.685566 4792 scope.go:117] "RemoveContainer" containerID="39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0" Feb 16 21:57:22 crc kubenswrapper[4792]: E0216 21:57:22.688364 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0\": container with ID starting with 39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0 not found: ID does not exist" containerID="39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0" Feb 16 21:57:22 crc kubenswrapper[4792]: I0216 21:57:22.688409 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0"} err="failed to get container status \"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0\": rpc error: code = NotFound desc = could not find container \"39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0\": container with ID starting with 39dc4c9d3b30c668de4cc9b7f2d9584c09930c44b9996b6aa2121087f40184f0 not found: ID does not exist" Feb 16 21:57:23 crc kubenswrapper[4792]: I0216 21:57:23.231484 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q4gs" podUID="fc8ee070-8557-4708-a58f-7e5899ed206b" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:57:23 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:57:23 crc kubenswrapper[4792]: > Feb 16 21:57:23 crc kubenswrapper[4792]: I0216 21:57:23.311147 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:57:23 crc kubenswrapper[4792]: I0216 21:57:23.565282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"965457d9218d47713e992973348974743efba7d4e078dc44398b9f682b35e9ce"} Feb 16 21:57:24 crc kubenswrapper[4792]: I0216 21:57:24.046238 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c162cb-aadc-4abf-a1f6-3875f813417d" path="/var/lib/kubelet/pods/07c162cb-aadc-4abf-a1f6-3875f813417d/volumes" Feb 16 21:57:25 crc kubenswrapper[4792]: I0216 21:57:25.969475 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qfl5b"] Feb 16 21:57:25 crc kubenswrapper[4792]: I0216 21:57:25.983115 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-qfl5b"] Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.050270 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a09915-e343-418c-abc3-790831a7e28f" path="/var/lib/kubelet/pods/04a09915-e343-418c-abc3-790831a7e28f/volumes" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.103672 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104147 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="dnsmasq-dns" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104164 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="dnsmasq-dns" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104176 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49abd6e-b475-4ad2-a88a-c0dc37ab2997" containerName="mariadb-database-create" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104183 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49abd6e-b475-4ad2-a88a-c0dc37ab2997" containerName="mariadb-database-create" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104197 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="init" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104203 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="init" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104214 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c162cb-aadc-4abf-a1f6-3875f813417d" containerName="console" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104220 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c162cb-aadc-4abf-a1f6-3875f813417d" containerName="console" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104233 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a09915-e343-418c-abc3-790831a7e28f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104238 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a09915-e343-418c-abc3-790831a7e28f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104247 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521cf6b2-e2cf-4ae6-a34c-71e15d93916f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104252 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="521cf6b2-e2cf-4ae6-a34c-71e15d93916f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: E0216 21:57:26.104269 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd5c80-d002-49e6-ac52-d1d323b83801" containerName="swift-ring-rebalance" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104276 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd5c80-d002-49e6-ac52-d1d323b83801" containerName="swift-ring-rebalance" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104454 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c162cb-aadc-4abf-a1f6-3875f813417d" containerName="console" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104469 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="496fb889-544d-45cf-883e-8523323a8c04" containerName="dnsmasq-dns" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104480 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="521cf6b2-e2cf-4ae6-a34c-71e15d93916f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104490 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd5c80-d002-49e6-ac52-d1d323b83801" containerName="swift-ring-rebalance" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104501 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a09915-e343-418c-abc3-790831a7e28f" containerName="mariadb-account-create-update" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.104508 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49abd6e-b475-4ad2-a88a-c0dc37ab2997" containerName="mariadb-database-create" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.105242 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.117905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.132698 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.215117 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.215251 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk7zp\" (UniqueName: \"kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.215294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.317411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.317589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.318341 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7zp\" (UniqueName: 
\"kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.331792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.332217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.347452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk7zp\" (UniqueName: \"kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp\") pod \"mysqld-exporter-0\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.432490 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.875729 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"60a05919007e27060ffb4fe80b38d4e75ca942d5f3e77f6be4c76aba93e9e8c6"} Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.900358 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerStarted","Data":"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e"} Feb 16 21:57:26 crc kubenswrapper[4792]: I0216 21:57:26.933213 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.602249993 podStartE2EDuration="1m2.933195416s" podCreationTimestamp="2026-02-16 21:56:24 +0000 UTC" firstStartedPulling="2026-02-16 21:56:46.616573919 +0000 UTC m=+1139.269852810" lastFinishedPulling="2026-02-16 21:57:25.947519342 +0000 UTC m=+1178.600798233" observedRunningTime="2026-02-16 21:57:26.933088893 +0000 UTC m=+1179.586367784" watchObservedRunningTime="2026-02-16 21:57:26.933195416 +0000 UTC m=+1179.586474307" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.228335 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q4gs" podUID="fc8ee070-8557-4708-a58f-7e5899ed206b" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:57:28 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:57:28 crc kubenswrapper[4792]: > Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.308381 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cfzsw" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.515107 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q4gs-config-fqc6r"] Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.516739 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.519123 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.538753 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs-config-fqc6r"] Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.663782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.663835 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.663872 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.664281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs5x\" (UniqueName: \"kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.664329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.664453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766504 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.766509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.767004 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.767100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs5x\" (UniqueName: \"kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.767466 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.769126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.788628 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs5x\" (UniqueName: \"kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x\") pod \"ovn-controller-5q4gs-config-fqc6r\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:28 crc kubenswrapper[4792]: I0216 21:57:28.837694 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:29 crc kubenswrapper[4792]: I0216 21:57:29.927809 4792 generic.go:334] "Generic (PLEG): container finished" podID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerID="bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a" exitCode=0 Feb 16 21:57:29 crc kubenswrapper[4792]: I0216 21:57:29.927942 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerDied","Data":"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a"} Feb 16 21:57:30 crc kubenswrapper[4792]: I0216 21:57:30.958721 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwdsh"] Feb 16 21:57:30 crc kubenswrapper[4792]: I0216 21:57:30.959935 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:30 crc kubenswrapper[4792]: I0216 21:57:30.962494 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 21:57:30 crc kubenswrapper[4792]: I0216 21:57:30.981875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwdsh"] Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.015110 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.017731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.017807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkw7\" (UniqueName: \"kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.120187 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 
21:57:31.121031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.121082 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfkw7\" (UniqueName: \"kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.154773 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfkw7\" (UniqueName: \"kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7\") pod \"root-account-create-update-zwdsh\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:31 crc kubenswrapper[4792]: I0216 21:57:31.279517 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.245725 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5q4gs" podUID="fc8ee070-8557-4708-a58f-7e5899ed206b" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:57:33 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:57:33 crc kubenswrapper[4792]: > Feb 16 21:57:33 crc kubenswrapper[4792]: W0216 21:57:33.744669 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc980837_58f8_41b6_97a5_f210e7fd10d0.slice/crio-8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5 WatchSource:0}: Error finding container 8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5: Status 404 returned error can't find the container with id 8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5 Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.747873 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwdsh"] Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.758433 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs-config-fqc6r"] Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.870903 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.888451 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.973480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8982l" event={"ID":"63303797-e14d-4091-ab14-8be69dd506ad","Type":"ContainerStarted","Data":"efde2ac3826c82fd210c05b8b6f844a98d01c07293e97df4cf91cdadb1f2e197"} Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.976456 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwdsh" 
event={"ID":"fc980837-58f8-41b6-97a5-f210e7fd10d0","Type":"ContainerStarted","Data":"30f031ce65b8dcaf0bda33292307d349ba80835e9778dd80115920fbf32d0d54"} Feb 16 21:57:33 crc kubenswrapper[4792]: I0216 21:57:33.976498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwdsh" event={"ID":"fc980837-58f8-41b6-97a5-f210e7fd10d0","Type":"ContainerStarted","Data":"8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:33.999985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8982l" podStartSLOduration=6.216548037 podStartE2EDuration="21.999970811s" podCreationTimestamp="2026-02-16 21:57:12 +0000 UTC" firstStartedPulling="2026-02-16 21:57:17.529251105 +0000 UTC m=+1170.182529996" lastFinishedPulling="2026-02-16 21:57:33.312673879 +0000 UTC m=+1185.965952770" observedRunningTime="2026-02-16 21:57:33.98951523 +0000 UTC m=+1186.642794121" watchObservedRunningTime="2026-02-16 21:57:33.999970811 +0000 UTC m=+1186.653249702" Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.000071 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerStarted","Data":"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.000310 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.010629 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zwdsh" podStartSLOduration=4.010615678 podStartE2EDuration="4.010615678s" podCreationTimestamp="2026-02-16 21:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:34.008273415 +0000 UTC m=+1186.661552306" watchObservedRunningTime="2026-02-16 21:57:34.010615678 +0000 UTC m=+1186.663894569" Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.013266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f95cab6a-8fca-4a8e-b9eb-3d1751864411","Type":"ContainerStarted","Data":"b88dc8b32e2aa269e57905d68fdddf8eee80ab56825f99ed2c2dc86d59a19efd"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.045677 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"ae655377484967cfab98326767357287d4b7204ca02df5eb29869d7e09ba9d61"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.045717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"ffa34cd152a1ed8d534008404f5bfda45549fd5893449cfd82df0c3c6aa29ac5"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.045730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"a209a05a538a0eb3f79e8ac5920a90d7ab7ca10cf246d0397f7e927f801f7827"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.045741 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-fqc6r" 
event={"ID":"beec2377-e08b-462a-83fa-0cf42eb76676","Type":"ContainerStarted","Data":"c0ce62912eb4fc2b654bc7b182b3289678fa89fbe4debea59068b87e6ea14293"} Feb 16 21:57:34 crc kubenswrapper[4792]: I0216 21:57:34.059818 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371959.794983 podStartE2EDuration="1m17.059793352s" podCreationTimestamp="2026-02-16 21:56:17 +0000 UTC" firstStartedPulling="2026-02-16 21:56:20.832254446 +0000 UTC m=+1113.485533337" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:34.0389202 +0000 UTC m=+1186.692199091" watchObservedRunningTime="2026-02-16 21:57:34.059793352 +0000 UTC m=+1186.713072243" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.054769 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"e6004c3941133b4ed9ff2a0421f4109486d11cbe22e062e4c0a46e9d63585770"} Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.055336 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"f3320d6696bf9f1c2d90061b38b9405fb02a9299f00f3d1b20d1a25831416566"} Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.055351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e2ada762-95ad-4810-b5da-b4ca59652a45","Type":"ContainerStarted","Data":"79a7130af9ff37c2d8f278954a52826fb32f9c93d72be0f63891b0c9988e430b"} Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.056562 4792 generic.go:334] "Generic (PLEG): container finished" podID="beec2377-e08b-462a-83fa-0cf42eb76676" containerID="24a450af07798fb54df8b438531aaf0da1b9411180deb04b2391292e7bd1515f" exitCode=0 Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.056657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-fqc6r" event={"ID":"beec2377-e08b-462a-83fa-0cf42eb76676","Type":"ContainerDied","Data":"24a450af07798fb54df8b438531aaf0da1b9411180deb04b2391292e7bd1515f"} Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.061443 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc980837-58f8-41b6-97a5-f210e7fd10d0" containerID="30f031ce65b8dcaf0bda33292307d349ba80835e9778dd80115920fbf32d0d54" exitCode=0 Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.061538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwdsh" event={"ID":"fc980837-58f8-41b6-97a5-f210e7fd10d0","Type":"ContainerDied","Data":"30f031ce65b8dcaf0bda33292307d349ba80835e9778dd80115920fbf32d0d54"} Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.103933 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=32.519247094 podStartE2EDuration="41.10390574s" podCreationTimestamp="2026-02-16 21:56:54 +0000 UTC" firstStartedPulling="2026-02-16 21:57:17.349457225 +0000 UTC m=+1170.002736116" lastFinishedPulling="2026-02-16 21:57:25.934115871 +0000 UTC m=+1178.587394762" observedRunningTime="2026-02-16 21:57:35.100679842 +0000 UTC m=+1187.753958753" watchObservedRunningTime="2026-02-16 21:57:35.10390574 +0000 UTC m=+1187.757184631" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.382780 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.384919 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.403980 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.407806 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426365 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426433 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt72\" (UniqueName: \"kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426497 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426710 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.426805 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.528709 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvt72\" (UniqueName: \"kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.528789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.528874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.528902 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.528989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.529016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.530285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.530377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.530915 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.530976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.531489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.548043 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvt72\" (UniqueName: \"kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72\") pod \"dnsmasq-dns-6d5b6d6b67-5rfzs\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:35 crc kubenswrapper[4792]: I0216 21:57:35.845907 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.073152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f95cab6a-8fca-4a8e-b9eb-3d1751864411","Type":"ContainerStarted","Data":"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3"} Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.107927 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=8.6395982 podStartE2EDuration="10.107909167s" podCreationTimestamp="2026-02-16 21:57:26 +0000 UTC" firstStartedPulling="2026-02-16 21:57:33.88813168 +0000 UTC m=+1186.541410571" lastFinishedPulling="2026-02-16 21:57:35.356442647 +0000 UTC m=+1188.009721538" observedRunningTime="2026-02-16 21:57:36.09020804 +0000 UTC m=+1188.743486941" watchObservedRunningTime="2026-02-16 21:57:36.107909167 +0000 UTC m=+1188.761188058" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.458562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.733764 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.740309 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.753895 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts\") pod \"fc980837-58f8-41b6-97a5-f210e7fd10d0\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754063 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcs5x\" (UniqueName: \"kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754180 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754201 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfkw7\" (UniqueName: \"kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7\") pod \"fc980837-58f8-41b6-97a5-f210e7fd10d0\" (UID: \"fc980837-58f8-41b6-97a5-f210e7fd10d0\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754217 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754244 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.754289 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts\") pod \"beec2377-e08b-462a-83fa-0cf42eb76676\" (UID: \"beec2377-e08b-462a-83fa-0cf42eb76676\") " Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.756120 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts" (OuterVolumeSpecName: "scripts") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.756768 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.757147 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run" (OuterVolumeSpecName: "var-run") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.757188 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.757189 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc980837-58f8-41b6-97a5-f210e7fd10d0" (UID: "fc980837-58f8-41b6-97a5-f210e7fd10d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.757689 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.766810 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7" (OuterVolumeSpecName: "kube-api-access-vfkw7") pod "fc980837-58f8-41b6-97a5-f210e7fd10d0" (UID: "fc980837-58f8-41b6-97a5-f210e7fd10d0"). InnerVolumeSpecName "kube-api-access-vfkw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.766960 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x" (OuterVolumeSpecName: "kube-api-access-jcs5x") pod "beec2377-e08b-462a-83fa-0cf42eb76676" (UID: "beec2377-e08b-462a-83fa-0cf42eb76676"). InnerVolumeSpecName "kube-api-access-jcs5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857044 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcs5x\" (UniqueName: \"kubernetes.io/projected/beec2377-e08b-462a-83fa-0cf42eb76676-kube-api-access-jcs5x\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857083 4792 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857092 4792 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857102 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfkw7\" (UniqueName: \"kubernetes.io/projected/fc980837-58f8-41b6-97a5-f210e7fd10d0-kube-api-access-vfkw7\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857112 4792 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/beec2377-e08b-462a-83fa-0cf42eb76676-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857120 4792 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857131 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beec2377-e08b-462a-83fa-0cf42eb76676-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:36 crc kubenswrapper[4792]: I0216 21:57:36.857139 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc980837-58f8-41b6-97a5-f210e7fd10d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.082041 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-fqc6r" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.082030 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-fqc6r" event={"ID":"beec2377-e08b-462a-83fa-0cf42eb76676","Type":"ContainerDied","Data":"c0ce62912eb4fc2b654bc7b182b3289678fa89fbe4debea59068b87e6ea14293"} Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.083107 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ce62912eb4fc2b654bc7b182b3289678fa89fbe4debea59068b87e6ea14293" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.083643 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerID="700051e2b19821cea1bf4617d65e3ba1a67f335772a6b9e9a09a52d49d802dd2" exitCode=0 Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.083681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" event={"ID":"3c6f592c-48f6-45db-8a27-caf7ff35b7ce","Type":"ContainerDied","Data":"700051e2b19821cea1bf4617d65e3ba1a67f335772a6b9e9a09a52d49d802dd2"} Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.084006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" event={"ID":"3c6f592c-48f6-45db-8a27-caf7ff35b7ce","Type":"ContainerStarted","Data":"ccffd75737384b886eb017be73b55393dc1e367844061983a516156041ef5273"} Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.085175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwdsh" event={"ID":"fc980837-58f8-41b6-97a5-f210e7fd10d0","Type":"ContainerDied","Data":"8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5"} Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.085192 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6da3d8aca1e05c8b7638143d1d5ce517ee72858fd58cd11bd0049cda648be5" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.085288 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwdsh" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.843982 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q4gs-config-fqc6r"] Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.854172 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5q4gs-config-fqc6r"] Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.956278 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5q4gs-config-95g4s"] Feb 16 21:57:37 crc kubenswrapper[4792]: E0216 21:57:37.956955 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc980837-58f8-41b6-97a5-f210e7fd10d0" containerName="mariadb-account-create-update" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.957033 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc980837-58f8-41b6-97a5-f210e7fd10d0" containerName="mariadb-account-create-update" Feb 16 21:57:37 crc kubenswrapper[4792]: E0216 21:57:37.957129 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beec2377-e08b-462a-83fa-0cf42eb76676" containerName="ovn-config" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.957186 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="beec2377-e08b-462a-83fa-0cf42eb76676" containerName="ovn-config" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.957480 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="beec2377-e08b-462a-83fa-0cf42eb76676" containerName="ovn-config" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.957571 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc980837-58f8-41b6-97a5-f210e7fd10d0" containerName="mariadb-account-create-update" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.958361 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.964159 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.968640 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs-config-95g4s"] Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.980921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.980971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.980996 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.981077 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.981189 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:37 crc kubenswrapper[4792]: I0216 21:57:37.981242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tlhj\" (UniqueName: \"kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.054203 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beec2377-e08b-462a-83fa-0cf42eb76676" path="/var/lib/kubelet/pods/beec2377-e08b-462a-83fa-0cf42eb76676/volumes" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.082779 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 
21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.082834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.082897 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.082987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.083041 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tlhj\" (UniqueName: \"kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.083120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.083444 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.083784 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.083925 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.084150 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc 
kubenswrapper[4792]: I0216 21:57:38.084601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.096338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" event={"ID":"3c6f592c-48f6-45db-8a27-caf7ff35b7ce","Type":"ContainerStarted","Data":"c874999e62700a5e133d4d3e676eda5a259fc1d73f1ee6fd3ebf6b12e843e528"} Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.097492 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.110753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tlhj\" (UniqueName: \"kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj\") pod \"ovn-controller-5q4gs-config-95g4s\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.128819 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" podStartSLOduration=3.128796178 podStartE2EDuration="3.128796178s" podCreationTimestamp="2026-02-16 21:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:38.122314974 +0000 UTC m=+1190.775593875" watchObservedRunningTime="2026-02-16 21:57:38.128796178 +0000 UTC m=+1190.782075069" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.242095 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5q4gs" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.297858 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:38 crc kubenswrapper[4792]: I0216 21:57:38.766885 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5q4gs-config-95g4s"] Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.107266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-95g4s" event={"ID":"9d497c8a-6556-4d68-8e28-abeec8cb4c3b","Type":"ContainerStarted","Data":"24e156cff974b6adba840e6304fa4d9473606ff354cf5a0d46936139e11b20bb"} Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.108380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-95g4s" event={"ID":"9d497c8a-6556-4d68-8e28-abeec8cb4c3b","Type":"ContainerStarted","Data":"c1cacdd2c9839e4a2e4dfb9c4c9d41483f02656fb98bebe180cfde9bcc57cab3"} Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.132297 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5q4gs-config-95g4s" podStartSLOduration=2.132279302 podStartE2EDuration="2.132279302s" podCreationTimestamp="2026-02-16 21:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:39.127117163 +0000 UTC m=+1191.780396054" watchObservedRunningTime="2026-02-16 21:57:39.132279302 +0000 UTC m=+1191.785558193" Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.311892 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.330758 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 21:57:39 crc kubenswrapper[4792]: I0216 21:57:39.634907 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:57:40 crc kubenswrapper[4792]: I0216 21:57:40.120848 4792 generic.go:334] "Generic (PLEG): container finished" podID="9d497c8a-6556-4d68-8e28-abeec8cb4c3b" containerID="24e156cff974b6adba840e6304fa4d9473606ff354cf5a0d46936139e11b20bb" exitCode=0 Feb 16 21:57:40 crc kubenswrapper[4792]: I0216 21:57:40.120954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-95g4s" event={"ID":"9d497c8a-6556-4d68-8e28-abeec8cb4c3b","Type":"ContainerDied","Data":"24e156cff974b6adba840e6304fa4d9473606ff354cf5a0d46936139e11b20bb"} Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.014998 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.018181 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.131780 4792 generic.go:334] "Generic (PLEG): container finished" podID="63303797-e14d-4091-ab14-8be69dd506ad" containerID="efde2ac3826c82fd210c05b8b6f844a98d01c07293e97df4cf91cdadb1f2e197" exitCode=0 Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.131881 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8982l" 
event={"ID":"63303797-e14d-4091-ab14-8be69dd506ad","Type":"ContainerDied","Data":"efde2ac3826c82fd210c05b8b6f844a98d01c07293e97df4cf91cdadb1f2e197"} Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.133633 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.604180 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778686 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778775 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778823 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778840 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tlhj\" (UniqueName: \"kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj\") pod \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\" (UID: \"9d497c8a-6556-4d68-8e28-abeec8cb4c3b\") " Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778939 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.778977 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run" (OuterVolumeSpecName: "var-run") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.779382 4792 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.779400 4792 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.779408 4792 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.779519 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.779842 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts" (OuterVolumeSpecName: "scripts") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.785209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj" (OuterVolumeSpecName: "kube-api-access-6tlhj") pod "9d497c8a-6556-4d68-8e28-abeec8cb4c3b" (UID: "9d497c8a-6556-4d68-8e28-abeec8cb4c3b"). InnerVolumeSpecName "kube-api-access-6tlhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.880659 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.880691 4792 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:41 crc kubenswrapper[4792]: I0216 21:57:41.880702 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tlhj\" (UniqueName: \"kubernetes.io/projected/9d497c8a-6556-4d68-8e28-abeec8cb4c3b-kube-api-access-6tlhj\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.148093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5q4gs-config-95g4s" event={"ID":"9d497c8a-6556-4d68-8e28-abeec8cb4c3b","Type":"ContainerDied","Data":"c1cacdd2c9839e4a2e4dfb9c4c9d41483f02656fb98bebe180cfde9bcc57cab3"} Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.148151 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cacdd2c9839e4a2e4dfb9c4c9d41483f02656fb98bebe180cfde9bcc57cab3" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.148403 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5q4gs-config-95g4s" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.229161 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5q4gs-config-95g4s"] Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.239286 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5q4gs-config-95g4s"] Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.648599 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8982l" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.824119 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data\") pod \"63303797-e14d-4091-ab14-8be69dd506ad\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.824289 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data\") pod \"63303797-e14d-4091-ab14-8be69dd506ad\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.824351 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqk6b\" (UniqueName: \"kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b\") pod \"63303797-e14d-4091-ab14-8be69dd506ad\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.824546 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle\") pod \"63303797-e14d-4091-ab14-8be69dd506ad\" (UID: \"63303797-e14d-4091-ab14-8be69dd506ad\") " Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.829584 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b" (OuterVolumeSpecName: "kube-api-access-qqk6b") pod "63303797-e14d-4091-ab14-8be69dd506ad" (UID: "63303797-e14d-4091-ab14-8be69dd506ad"). InnerVolumeSpecName "kube-api-access-qqk6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.829758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "63303797-e14d-4091-ab14-8be69dd506ad" (UID: "63303797-e14d-4091-ab14-8be69dd506ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.855513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63303797-e14d-4091-ab14-8be69dd506ad" (UID: "63303797-e14d-4091-ab14-8be69dd506ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.896991 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data" (OuterVolumeSpecName: "config-data") pod "63303797-e14d-4091-ab14-8be69dd506ad" (UID: "63303797-e14d-4091-ab14-8be69dd506ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.927131 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.927398 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.927428 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqk6b\" (UniqueName: \"kubernetes.io/projected/63303797-e14d-4091-ab14-8be69dd506ad-kube-api-access-qqk6b\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:42 crc kubenswrapper[4792]: I0216 21:57:42.927439 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63303797-e14d-4091-ab14-8be69dd506ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.164489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8982l" event={"ID":"63303797-e14d-4091-ab14-8be69dd506ad","Type":"ContainerDied","Data":"c34c0e230a2f539fffe9e8408904859de9b0b95882e429860e78bc9fc8f09898"} Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.164537 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c34c0e230a2f539fffe9e8408904859de9b0b95882e429860e78bc9fc8f09898" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.164620 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8982l" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.622815 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.623322 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="dnsmasq-dns" containerID="cri-o://c874999e62700a5e133d4d3e676eda5a259fc1d73f1ee6fd3ebf6b12e843e528" gracePeriod=10 Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.624807 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.655797 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:57:43 crc kubenswrapper[4792]: E0216 21:57:43.657499 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63303797-e14d-4091-ab14-8be69dd506ad" containerName="glance-db-sync" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.657519 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="63303797-e14d-4091-ab14-8be69dd506ad" containerName="glance-db-sync" Feb 16 21:57:43 crc kubenswrapper[4792]: E0216 21:57:43.657546 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d497c8a-6556-4d68-8e28-abeec8cb4c3b" containerName="ovn-config" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.657552 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d497c8a-6556-4d68-8e28-abeec8cb4c3b" containerName="ovn-config" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.657892 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d497c8a-6556-4d68-8e28-abeec8cb4c3b" containerName="ovn-config" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.657918 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="63303797-e14d-4091-ab14-8be69dd506ad" containerName="glance-db-sync" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.665817 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.672795 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.853580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.854023 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.854144 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.854193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.854217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sj2s\" (UniqueName: \"kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.854240 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955907 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sj2s\" (UniqueName: \"kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.955943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.956922 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.957133 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.959776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.960807 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.961253 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc\") pod \"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.980066 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sj2s\" (UniqueName: \"kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s\") pod 
\"dnsmasq-dns-895cf5cf-bjcg8\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.989641 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.989959 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="prometheus" containerID="cri-o://d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6" gracePeriod=600 Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.990117 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="thanos-sidecar" containerID="cri-o://ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e" gracePeriod=600 Feb 16 21:57:43 crc kubenswrapper[4792]: I0216 21:57:43.990169 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="config-reloader" containerID="cri-o://2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff" gracePeriod=600 Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.013750 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.054446 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d497c8a-6556-4d68-8e28-abeec8cb4c3b" path="/var/lib/kubelet/pods/9d497c8a-6556-4d68-8e28-abeec8cb4c3b/volumes" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.211660 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerID="ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e" exitCode=0 Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.211936 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerID="d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6" exitCode=0 Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.211880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerDied","Data":"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e"} Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.212020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerDied","Data":"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6"} Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.215691 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerID="c874999e62700a5e133d4d3e676eda5a259fc1d73f1ee6fd3ebf6b12e843e528" exitCode=0 Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.215739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" event={"ID":"3c6f592c-48f6-45db-8a27-caf7ff35b7ce","Type":"ContainerDied","Data":"c874999e62700a5e133d4d3e676eda5a259fc1d73f1ee6fd3ebf6b12e843e528"} Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 
21:57:44.215771 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" event={"ID":"3c6f592c-48f6-45db-8a27-caf7ff35b7ce","Type":"ContainerDied","Data":"ccffd75737384b886eb017be73b55393dc1e367844061983a516156041ef5273"} Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.215788 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccffd75737384b886eb017be73b55393dc1e367844061983a516156041ef5273" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.225856 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.364533 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.364688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.364758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.364800 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.364907 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvt72\" (UniqueName: \"kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.365061 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb\") pod \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\" (UID: \"3c6f592c-48f6-45db-8a27-caf7ff35b7ce\") " Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.375334 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72" (OuterVolumeSpecName: "kube-api-access-gvt72") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "kube-api-access-gvt72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.425460 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.446173 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config" (OuterVolumeSpecName: "config") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.456906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.457490 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.460367 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c6f592c-48f6-45db-8a27-caf7ff35b7ce" (UID: "3c6f592c-48f6-45db-8a27-caf7ff35b7ce"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468331 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468378 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468397 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468409 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvt72\" (UniqueName: \"kubernetes.io/projected/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-kube-api-access-gvt72\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468423 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.468435 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c6f592c-48f6-45db-8a27-caf7ff35b7ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.552203 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:57:44 crc kubenswrapper[4792]: W0216 21:57:44.560532 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d82eba4_4763_4dc0_a3f3_5236c0119764.slice/crio-171865ec70bc21797319d26d6b32af2c8d863379945315657212153bb01025c0 WatchSource:0}: Error finding container 171865ec70bc21797319d26d6b32af2c8d863379945315657212153bb01025c0: Status 404 returned error can't find the container with id 171865ec70bc21797319d26d6b32af2c8d863379945315657212153bb01025c0 Feb 16 21:57:44 crc kubenswrapper[4792]: I0216 21:57:44.984160 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.079868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080106 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080126 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080159 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mvz7\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080190 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080345 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080887 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080963 4792 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets\") pod \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\" (UID: \"d8bd9c3b-0357-4270-8e43-6d6a3da3534d\") " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080553 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.080713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.081560 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.081584 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.081578 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.085783 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.086223 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out" (OuterVolumeSpecName: "config-out") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.086573 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config" (OuterVolumeSpecName: "config") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.093317 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7" (OuterVolumeSpecName: "kube-api-access-7mvz7") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "kube-api-access-7mvz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.093707 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.103701 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.113217 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config" (OuterVolumeSpecName: "web-config") pod "d8bd9c3b-0357-4270-8e43-6d6a3da3534d" (UID: "d8bd9c3b-0357-4270-8e43-6d6a3da3534d"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183312 4792 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183343 4792 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183352 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183362 4792 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183388 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") on node \"crc\" " Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183398 4792 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183408 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.183417 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mvz7\" (UniqueName: \"kubernetes.io/projected/d8bd9c3b-0357-4270-8e43-6d6a3da3534d-kube-api-access-7mvz7\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.204375 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.204517 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5") on node "crc" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.232560 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerID="2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff" exitCode=0 Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.233076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerDied","Data":"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff"} Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.234333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d8bd9c3b-0357-4270-8e43-6d6a3da3534d","Type":"ContainerDied","Data":"4bbba6826bb12f8c042a6d488233583006d9b51f95bd062cea3dd055ac003dd5"} Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.234410 4792 scope.go:117] "RemoveContainer" containerID="ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.235046 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.235846 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerID="f50ba233704645564d73dbb6705ac3cf134773655156b8bdb936a1e1316e2cf7" exitCode=0 Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.235906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" event={"ID":"4d82eba4-4763-4dc0-a3f3-5236c0119764","Type":"ContainerDied","Data":"f50ba233704645564d73dbb6705ac3cf134773655156b8bdb936a1e1316e2cf7"} Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.235927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" event={"ID":"4d82eba4-4763-4dc0-a3f3-5236c0119764","Type":"ContainerStarted","Data":"171865ec70bc21797319d26d6b32af2c8d863379945315657212153bb01025c0"} Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.236888 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-5rfzs" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.257536 4792 scope.go:117] "RemoveContainer" containerID="2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.286099 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.351493 4792 scope.go:117] "RemoveContainer" containerID="d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.363889 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.397739 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-5rfzs"] Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.409487 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.430930 4792 scope.go:117] "RemoveContainer" containerID="41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.432706 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.441993 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442456 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="init" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442475 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="init" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442488 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="dnsmasq-dns" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442494 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="dnsmasq-dns" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442514 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="thanos-sidecar" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442520 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="thanos-sidecar" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442533 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="prometheus" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442538 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="prometheus" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442547 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="init-config-reloader" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442554 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="init-config-reloader" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.442565 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="config-reloader" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442570 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="config-reloader" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442773 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" containerName="dnsmasq-dns" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442799 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="thanos-sidecar" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442811 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="prometheus" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.442820 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" containerName="config-reloader" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.444484 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.448333 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.448586 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.448762 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbcwk" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.448903 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.449075 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.449219 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.449368 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.450563 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.454959 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.456939 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.469146 4792 scope.go:117] "RemoveContainer" containerID="ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e" Feb 16 21:57:45 crc kubenswrapper[4792]: 
E0216 21:57:45.470957 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e\": container with ID starting with ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e not found: ID does not exist" containerID="ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.470999 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e"} err="failed to get container status \"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e\": rpc error: code = NotFound desc = could not find container \"ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e\": container with ID starting with ba4b058f02fc512c21883f1b2742103209cfbb3926fb0b15e6088069737a2a0e not found: ID does not exist" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.471027 4792 scope.go:117] "RemoveContainer" containerID="2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.472312 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff\": container with ID starting with 2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff not found: ID does not exist" containerID="2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.472358 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff"} err="failed to get container status \"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff\": rpc error: code = NotFound desc = could not find container \"2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff\": container with ID starting with 2b3f5f368975180180d4beb706124e401fa08f30b67e31678860e6be7e4c12ff not found: ID does not exist" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.472388 4792 scope.go:117] "RemoveContainer" containerID="d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.472826 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6\": container with ID starting with d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6 not found: ID does not exist" containerID="d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.472860 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6"} err="failed to get container status \"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6\": rpc error: code = NotFound desc = could not find container \"d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6\": container with ID starting with d676274c795ddf7805d0f83849309052d290aa95b5b0f131e570f4a057ff1ce6 not found: ID does not exist" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.472878 
4792 scope.go:117] "RemoveContainer" containerID="41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257" Feb 16 21:57:45 crc kubenswrapper[4792]: E0216 21:57:45.473068 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257\": container with ID starting with 41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257 not found: ID does not exist" containerID="41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.473092 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257"} err="failed to get container status \"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257\": rpc error: code = NotFound desc = could not find container \"41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257\": container with ID starting with 41716793b06e2d807add08dd1ba13c8286af61cce90438ab968855b69572e257 not found: ID does not exist" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.592637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.592977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593106 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593535 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593568 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tccs6\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-kube-api-access-tccs6\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.593776 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tccs6\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-kube-api-access-tccs6\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695578 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695675 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695745 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695764 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.695809 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.696762 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.696890 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.697029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.697832 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.697864 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0c450b835612ea0ffc6154278231fd6293d2b9aab214db327cd461039eaa73be/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.699485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.699516 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.699902 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.700136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.700510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.701203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.701698 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.702505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.714870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tccs6\" (UniqueName: \"kubernetes.io/projected/8ee2931a-9b3b-4568-b83b-9846e6f9c65a-kube-api-access-tccs6\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.738064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-356ab99e-773a-4f96-8cf3-1d6fe31579b5\") pod \"prometheus-metric-storage-0\" (UID: \"8ee2931a-9b3b-4568-b83b-9846e6f9c65a\") " pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:45 crc kubenswrapper[4792]: I0216 21:57:45.768238 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.042661 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6f592c-48f6-45db-8a27-caf7ff35b7ce" path="/var/lib/kubelet/pods/3c6f592c-48f6-45db-8a27-caf7ff35b7ce/volumes" Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.043775 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8bd9c3b-0357-4270-8e43-6d6a3da3534d" path="/var/lib/kubelet/pods/d8bd9c3b-0357-4270-8e43-6d6a3da3534d/volumes" Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.245168 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.249189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" event={"ID":"4d82eba4-4763-4dc0-a3f3-5236c0119764","Type":"ContainerStarted","Data":"9c258bae3f1eae089555296a79d1bf8dc912bccf1afd5fb59fa3f995fea49a65"} Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.249404 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:46 crc kubenswrapper[4792]: W0216 21:57:46.249764 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee2931a_9b3b_4568_b83b_9846e6f9c65a.slice/crio-1d51d089eb98edf60dda476aaf5da42f9b4482d0ba6638bd9c38f4f13d302907 WatchSource:0}: Error finding container 1d51d089eb98edf60dda476aaf5da42f9b4482d0ba6638bd9c38f4f13d302907: Status 404 returned error can't find the container with id 1d51d089eb98edf60dda476aaf5da42f9b4482d0ba6638bd9c38f4f13d302907 Feb 16 21:57:46 crc kubenswrapper[4792]: I0216 21:57:46.275410 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" podStartSLOduration=3.275392312 podStartE2EDuration="3.275392312s" podCreationTimestamp="2026-02-16 21:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:46.272185436 +0000 UTC m=+1198.925464327" watchObservedRunningTime="2026-02-16 
21:57:46.275392312 +0000 UTC m=+1198.928671203" Feb 16 21:57:47 crc kubenswrapper[4792]: I0216 21:57:47.264137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerStarted","Data":"1d51d089eb98edf60dda476aaf5da42f9b4482d0ba6638bd9c38f4f13d302907"} Feb 16 21:57:49 crc kubenswrapper[4792]: I0216 21:57:49.314004 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 21:57:49 crc kubenswrapper[4792]: I0216 21:57:49.571927 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 21:57:49 crc kubenswrapper[4792]: I0216 21:57:49.954504 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-x996w"] Feb 16 21:57:49 crc kubenswrapper[4792]: I0216 21:57:49.955825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x996w" Feb 16 21:57:49 crc kubenswrapper[4792]: I0216 21:57:49.970369 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x996w"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.049565 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-p6tpn"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.051183 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.068572 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p6tpn"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.097785 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-989d-account-create-update-5x2fg"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.099252 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.104732 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.110651 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.110770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzbg\" (UniqueName: \"kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.113141 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-989d-account-create-update-5x2fg"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.169983 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f6a4-account-create-update-tnht4"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.171399 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.173622 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.181230 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f6a4-account-create-update-tnht4"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.212894 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqns8\" (UniqueName: \"kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.213174 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrngb\" (UniqueName: \"kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.213260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.213307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzbg\" (UniqueName: \"kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.213541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.213865 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.214372 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.287893 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f4de-account-create-update-n2r2d"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.290714 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.296399 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320191 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqns8\" (UniqueName: \"kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngb\" (UniqueName: \"kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320411 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24x2\" (UniqueName: \"kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.320449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.321120 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.325237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.334925 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzbg\" (UniqueName: \"kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg\") pod \"cinder-db-create-x996w\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.339057 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f4de-account-create-update-n2r2d"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.349957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerStarted","Data":"55303a5b781fbc11675f2d5fb4fb5d225f7fa4bb86d1066b834e960e55620605"} Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.384344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqns8\" (UniqueName: \"kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8\") pod \"heat-989d-account-create-update-5x2fg\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.387532 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sjs8x"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.388809 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.395402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngb\" (UniqueName: \"kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb\") pod \"heat-db-create-p6tpn\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.398899 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.399036 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.399108 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.399248 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gjvkz" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.409065 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sjs8x"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.416255 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r7rzq"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.417700 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.421886 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.431733 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.431873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts\") pod \"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.431922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbfm\" (UniqueName: \"kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm\") pod \"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.431991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24x2\" (UniqueName: \"kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.432470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.441071 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r7rzq"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.465799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24x2\" (UniqueName: \"kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2\") pod \"cinder-f6a4-account-create-update-tnht4\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.487128 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.499473 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4a27-account-create-update-ljhjm"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.501199 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.503052 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.508486 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4a27-account-create-update-ljhjm"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534203 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnwh\" (UniqueName: \"kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh\") pod \"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86hgl\" (UniqueName: \"kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534338 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534404 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts\") pod \"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534422 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts\") pod \"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534466 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbfm\" (UniqueName: \"kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm\") pod \"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.534556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.536784 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts\") pod 
\"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.556757 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6hn4j"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.561377 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.577275 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6hn4j"] Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.583022 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x996w" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.588816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbfm\" (UniqueName: \"kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm\") pod \"barbican-f4de-account-create-update-n2r2d\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts\") pod \"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlct4\" (UniqueName: \"kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636205 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts\") pod \"neutron-4a27-account-create-update-ljhjm\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636293 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636354 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnwh\" (UniqueName: \"kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh\") pod 
\"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d82g7\" (UniqueName: \"kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7\") pod \"neutron-4a27-account-create-update-ljhjm\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86hgl\" (UniqueName: \"kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.636455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.637084 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts\") pod \"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.643929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.644142 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.665019 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.666167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86hgl\" (UniqueName: \"kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl\") pod \"keystone-db-sync-sjs8x\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.670674 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.680533 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnwh\" (UniqueName: \"kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh\") pod \"barbican-db-create-r7rzq\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.740504 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d82g7\" (UniqueName: \"kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7\") pod \"neutron-4a27-account-create-update-ljhjm\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.740961 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlct4\" (UniqueName: \"kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.741002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.741082 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts\") pod \"neutron-4a27-account-create-update-ljhjm\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.742912 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.743099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts\") pod \"neutron-4a27-account-create-update-ljhjm\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.761780 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlct4\" (UniqueName: \"kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4\") pod \"neutron-db-create-6hn4j\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.762561 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d82g7\" (UniqueName: \"kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7\") pod \"neutron-4a27-account-create-update-ljhjm\" 
(UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.878776 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.933086 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.952582 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:50 crc kubenswrapper[4792]: I0216 21:57:50.963008 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.156352 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x996w"] Feb 16 21:57:51 crc kubenswrapper[4792]: W0216 21:57:51.169007 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcf27831_30f8_406a_a277_c6e61987fe35.slice/crio-a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76 WatchSource:0}: Error finding container a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76: Status 404 returned error can't find the container with id a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76 Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.170497 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f6a4-account-create-update-tnht4"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.330420 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-989d-account-create-update-5x2fg"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.369004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x996w" event={"ID":"b80bdd05-0def-4f41-a14a-5ad83cd6428f","Type":"ContainerStarted","Data":"c11971621e5508a4a40eb4ac57506c8c0cd457350ed7b90d0b363d9a54411289"} Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.378511 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-989d-account-create-update-5x2fg" event={"ID":"be8ad371-835d-4087-b6c5-00576bc60ab8","Type":"ContainerStarted","Data":"a85e0f81a2e9bf479c2bbfc12c1942cff28d90cc8a733dfa5c561ceb498329ff"} Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.386078 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6a4-account-create-update-tnht4" event={"ID":"bcf27831-30f8-406a-a277-c6e61987fe35","Type":"ContainerStarted","Data":"a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76"} Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.454872 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p6tpn"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.497093 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f4de-account-create-update-n2r2d"] Feb 16 21:57:51 crc kubenswrapper[4792]: W0216 21:57:51.524588 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7463b1e3_c90a_4525_a6d4_6d7892578aae.slice/crio-02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab WatchSource:0}: Error finding container 
02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab: Status 404 returned error can't find the container with id 02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.674226 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r7rzq"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.716764 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sjs8x"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.737708 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4a27-account-create-update-ljhjm"] Feb 16 21:57:51 crc kubenswrapper[4792]: I0216 21:57:51.944521 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6hn4j"] Feb 16 21:57:52 crc kubenswrapper[4792]: W0216 21:57:52.006792 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb9482b7_b9a0_4114_92d2_be2276447412.slice/crio-5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389 WatchSource:0}: Error finding container 5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389: Status 404 returned error can't find the container with id 5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.105458 4792 scope.go:117] "RemoveContainer" containerID="f401645f65ad20f7743361479f9dae53b36834780df573383f45cdc5183474a2" Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.138760 4792 scope.go:117] "RemoveContainer" containerID="07342f312f2865377f57d823f104651c54354b1926128f205bb5c3bf519bb473" Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.267176 4792 scope.go:117] "RemoveContainer" containerID="e6a580920ed7119b50f2b03ab2494e66b40e9282622ee23b2c17bd1e1df2569b" Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.400241 4792 generic.go:334] "Generic (PLEG): container finished" podID="4034f818-c02e-451d-92ae-ebf4deb873ab" containerID="272562e601ee35eb182c445747fc389a6c22272eee06ea75d29521a0c0774033" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.400315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7rzq" event={"ID":"4034f818-c02e-451d-92ae-ebf4deb873ab","Type":"ContainerDied","Data":"272562e601ee35eb182c445747fc389a6c22272eee06ea75d29521a0c0774033"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.400342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7rzq" event={"ID":"4034f818-c02e-451d-92ae-ebf4deb873ab","Type":"ContainerStarted","Data":"56b1eb2a3cc30753977706ff1c08bef7fa7a26504cb7cedf7ac72ca7ca23defa"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.404349 4792 generic.go:334] "Generic (PLEG): container finished" podID="bcf27831-30f8-406a-a277-c6e61987fe35" containerID="70ebae63c739e4cbe7307aaeb7839559d1ff4df39eab94e8efa35058cc0a18c9" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.404448 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6a4-account-create-update-tnht4" event={"ID":"bcf27831-30f8-406a-a277-c6e61987fe35","Type":"ContainerDied","Data":"70ebae63c739e4cbe7307aaeb7839559d1ff4df39eab94e8efa35058cc0a18c9"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.406836 4792 generic.go:334] "Generic (PLEG): container finished" podID="646151e2-5537-4de8-a366-f2e2aa64a307" 
containerID="9140ed259305e859ec4639f76119ac046ec64742dfcbe6946acdb68b95ba7a55" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.406869 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p6tpn" event={"ID":"646151e2-5537-4de8-a366-f2e2aa64a307","Type":"ContainerDied","Data":"9140ed259305e859ec4639f76119ac046ec64742dfcbe6946acdb68b95ba7a55"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.406896 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p6tpn" event={"ID":"646151e2-5537-4de8-a366-f2e2aa64a307","Type":"ContainerStarted","Data":"22553ed26e8a18c67a03bbcdfb2b7c978010c8135899f34647e6b6190b36f153"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.408656 4792 generic.go:334] "Generic (PLEG): container finished" podID="b80bdd05-0def-4f41-a14a-5ad83cd6428f" containerID="8cdb18e83507104ae90346580353c7398167b9112d9bd849b693bad27f548046" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.408723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x996w" event={"ID":"b80bdd05-0def-4f41-a14a-5ad83cd6428f","Type":"ContainerDied","Data":"8cdb18e83507104ae90346580353c7398167b9112d9bd849b693bad27f548046"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.410689 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sjs8x" event={"ID":"2b77bea6-4e1c-42d4-a33c-da52abd756a6","Type":"ContainerStarted","Data":"d22ece39bc3316fb1405eff61b689d58eeb06a8e60da5e05981ba721e20d7afb"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.415536 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6hn4j" event={"ID":"eb9482b7-b9a0-4114-92d2-be2276447412","Type":"ContainerStarted","Data":"029bae05d728ad2b343e88dbf0d0ffaa3e2ec37322443cf9db14af4b0d14ddb6"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.415614 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6hn4j" event={"ID":"eb9482b7-b9a0-4114-92d2-be2276447412","Type":"ContainerStarted","Data":"5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.420957 4792 generic.go:334] "Generic (PLEG): container finished" podID="7463b1e3-c90a-4525-a6d4-6d7892578aae" containerID="0da8855b787c27d78ff7c5127b606db896f4aac12c638aa59eb2510bbe276e34" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.421121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f4de-account-create-update-n2r2d" event={"ID":"7463b1e3-c90a-4525-a6d4-6d7892578aae","Type":"ContainerDied","Data":"0da8855b787c27d78ff7c5127b606db896f4aac12c638aa59eb2510bbe276e34"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.421143 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f4de-account-create-update-n2r2d" event={"ID":"7463b1e3-c90a-4525-a6d4-6d7892578aae","Type":"ContainerStarted","Data":"02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.424173 4792 generic.go:334] "Generic (PLEG): container finished" podID="88ee292f-3c7f-4131-8f57-682fe8679f15" containerID="73edc24681473eecf50dfa8dbe83f85015c9c7bf6eec8a79117e2599acec5666" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.424241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4a27-account-create-update-ljhjm" 
event={"ID":"88ee292f-3c7f-4131-8f57-682fe8679f15","Type":"ContainerDied","Data":"73edc24681473eecf50dfa8dbe83f85015c9c7bf6eec8a79117e2599acec5666"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.424272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4a27-account-create-update-ljhjm" event={"ID":"88ee292f-3c7f-4131-8f57-682fe8679f15","Type":"ContainerStarted","Data":"8175b99f1cfb2dac5b77951aea7a1927b7c29bfbfac7485693eb14050fdfd9eb"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.426809 4792 generic.go:334] "Generic (PLEG): container finished" podID="be8ad371-835d-4087-b6c5-00576bc60ab8" containerID="fb4c5a8b42e6c0cec9da4150d6f7e7ab23fc96ee7b9a286cbb3aee474bcf29b2" exitCode=0 Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.426859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-989d-account-create-update-5x2fg" event={"ID":"be8ad371-835d-4087-b6c5-00576bc60ab8","Type":"ContainerDied","Data":"fb4c5a8b42e6c0cec9da4150d6f7e7ab23fc96ee7b9a286cbb3aee474bcf29b2"} Feb 16 21:57:52 crc kubenswrapper[4792]: I0216 21:57:52.489259 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-6hn4j" podStartSLOduration=2.489243538 podStartE2EDuration="2.489243538s" podCreationTimestamp="2026-02-16 21:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:57:52.485289432 +0000 UTC m=+1205.138568323" watchObservedRunningTime="2026-02-16 21:57:52.489243538 +0000 UTC m=+1205.142522429" Feb 16 21:57:53 crc kubenswrapper[4792]: I0216 21:57:53.441624 4792 generic.go:334] "Generic (PLEG): container finished" podID="eb9482b7-b9a0-4114-92d2-be2276447412" containerID="029bae05d728ad2b343e88dbf0d0ffaa3e2ec37322443cf9db14af4b0d14ddb6" exitCode=0 Feb 16 21:57:53 crc kubenswrapper[4792]: I0216 21:57:53.442303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6hn4j" event={"ID":"eb9482b7-b9a0-4114-92d2-be2276447412","Type":"ContainerDied","Data":"029bae05d728ad2b343e88dbf0d0ffaa3e2ec37322443cf9db14af4b0d14ddb6"} Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.015877 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.105056 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"] Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.105290 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="dnsmasq-dns" containerID="cri-o://29d0cbe4aa297ca43eb6c9e7c7a2320129194b4520513b5b44bef2167689fabe" gracePeriod=10 Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.455812 4792 generic.go:334] "Generic (PLEG): container finished" podID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerID="29d0cbe4aa297ca43eb6c9e7c7a2320129194b4520513b5b44bef2167689fabe" exitCode=0 Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.455898 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" event={"ID":"2df2814e-70ee-40f3-9efe-4d7cfe16bd38","Type":"ContainerDied","Data":"29d0cbe4aa297ca43eb6c9e7c7a2320129194b4520513b5b44bef2167689fabe"} Feb 16 21:57:54 crc kubenswrapper[4792]: I0216 21:57:54.817905 4792 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Feb 16 21:57:56 crc kubenswrapper[4792]: I0216 21:57:56.478360 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ee2931a-9b3b-4568-b83b-9846e6f9c65a" containerID="55303a5b781fbc11675f2d5fb4fb5d225f7fa4bb86d1066b834e960e55620605" exitCode=0 Feb 16 21:57:56 crc kubenswrapper[4792]: I0216 21:57:56.478449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerDied","Data":"55303a5b781fbc11675f2d5fb4fb5d225f7fa4bb86d1066b834e960e55620605"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.186087 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x996w" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.275890 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.293488 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.301026 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.326234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knzbg\" (UniqueName: \"kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg\") pod \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.326352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts\") pod \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\" (UID: \"b80bdd05-0def-4f41-a14a-5ad83cd6428f\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.327097 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.327478 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b80bdd05-0def-4f41-a14a-5ad83cd6428f" (UID: "b80bdd05-0def-4f41-a14a-5ad83cd6428f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.336065 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.336111 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg" (OuterVolumeSpecName: "kube-api-access-knzbg") pod "b80bdd05-0def-4f41-a14a-5ad83cd6428f" (UID: "b80bdd05-0def-4f41-a14a-5ad83cd6428f"). InnerVolumeSpecName "kube-api-access-knzbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.344378 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.355564 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.375566 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.427965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts\") pod \"4034f818-c02e-451d-92ae-ebf4deb873ab\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428021 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts\") pod \"bcf27831-30f8-406a-a277-c6e61987fe35\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428046 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbfm\" (UniqueName: \"kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm\") pod \"7463b1e3-c90a-4525-a6d4-6d7892578aae\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428063 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvnwh\" (UniqueName: \"kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh\") pod \"4034f818-c02e-451d-92ae-ebf4deb873ab\" (UID: \"4034f818-c02e-451d-92ae-ebf4deb873ab\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428083 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts\") pod \"eb9482b7-b9a0-4114-92d2-be2276447412\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d82g7\" (UniqueName: \"kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7\") pod \"88ee292f-3c7f-4131-8f57-682fe8679f15\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts\") pod \"be8ad371-835d-4087-b6c5-00576bc60ab8\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqns8\" (UniqueName: \"kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8\") pod \"be8ad371-835d-4087-b6c5-00576bc60ab8\" (UID: \"be8ad371-835d-4087-b6c5-00576bc60ab8\") " Feb 16 21:57:57 crc 
kubenswrapper[4792]: I0216 21:57:57.428221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts\") pod \"88ee292f-3c7f-4131-8f57-682fe8679f15\" (UID: \"88ee292f-3c7f-4131-8f57-682fe8679f15\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428237 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlct4\" (UniqueName: \"kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4\") pod \"eb9482b7-b9a0-4114-92d2-be2276447412\" (UID: \"eb9482b7-b9a0-4114-92d2-be2276447412\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428293 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts\") pod \"7463b1e3-c90a-4525-a6d4-6d7892578aae\" (UID: \"7463b1e3-c90a-4525-a6d4-6d7892578aae\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428337 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s24x2\" (UniqueName: \"kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2\") pod \"bcf27831-30f8-406a-a277-c6e61987fe35\" (UID: \"bcf27831-30f8-406a-a277-c6e61987fe35\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428398 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts\") pod \"646151e2-5537-4de8-a366-f2e2aa64a307\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrngb\" (UniqueName: \"kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb\") pod \"646151e2-5537-4de8-a366-f2e2aa64a307\" (UID: \"646151e2-5537-4de8-a366-f2e2aa64a307\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428446 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcf27831-30f8-406a-a277-c6e61987fe35" (UID: "bcf27831-30f8-406a-a277-c6e61987fe35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428823 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4034f818-c02e-451d-92ae-ebf4deb873ab" (UID: "4034f818-c02e-451d-92ae-ebf4deb873ab"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428832 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knzbg\" (UniqueName: \"kubernetes.io/projected/b80bdd05-0def-4f41-a14a-5ad83cd6428f-kube-api-access-knzbg\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428869 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf27831-30f8-406a-a277-c6e61987fe35-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.428881 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b80bdd05-0def-4f41-a14a-5ad83cd6428f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.429403 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7463b1e3-c90a-4525-a6d4-6d7892578aae" (UID: "7463b1e3-c90a-4525-a6d4-6d7892578aae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.429606 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "646151e2-5537-4de8-a366-f2e2aa64a307" (UID: "646151e2-5537-4de8-a366-f2e2aa64a307"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.429632 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88ee292f-3c7f-4131-8f57-682fe8679f15" (UID: "88ee292f-3c7f-4131-8f57-682fe8679f15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.429674 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be8ad371-835d-4087-b6c5-00576bc60ab8" (UID: "be8ad371-835d-4087-b6c5-00576bc60ab8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.429828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb9482b7-b9a0-4114-92d2-be2276447412" (UID: "eb9482b7-b9a0-4114-92d2-be2276447412"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.433640 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm" (OuterVolumeSpecName: "kube-api-access-dwbfm") pod "7463b1e3-c90a-4525-a6d4-6d7892578aae" (UID: "7463b1e3-c90a-4525-a6d4-6d7892578aae"). InnerVolumeSpecName "kube-api-access-dwbfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.439219 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8" (OuterVolumeSpecName: "kube-api-access-vqns8") pod "be8ad371-835d-4087-b6c5-00576bc60ab8" (UID: "be8ad371-835d-4087-b6c5-00576bc60ab8"). InnerVolumeSpecName "kube-api-access-vqns8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.439345 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh" (OuterVolumeSpecName: "kube-api-access-jvnwh") pod "4034f818-c02e-451d-92ae-ebf4deb873ab" (UID: "4034f818-c02e-451d-92ae-ebf4deb873ab"). InnerVolumeSpecName "kube-api-access-jvnwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.439489 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2" (OuterVolumeSpecName: "kube-api-access-s24x2") pod "bcf27831-30f8-406a-a277-c6e61987fe35" (UID: "bcf27831-30f8-406a-a277-c6e61987fe35"). InnerVolumeSpecName "kube-api-access-s24x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.439756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb" (OuterVolumeSpecName: "kube-api-access-nrngb") pod "646151e2-5537-4de8-a366-f2e2aa64a307" (UID: "646151e2-5537-4de8-a366-f2e2aa64a307"). InnerVolumeSpecName "kube-api-access-nrngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.440292 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7" (OuterVolumeSpecName: "kube-api-access-d82g7") pod "88ee292f-3c7f-4131-8f57-682fe8679f15" (UID: "88ee292f-3c7f-4131-8f57-682fe8679f15"). InnerVolumeSpecName "kube-api-access-d82g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.445113 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4" (OuterVolumeSpecName: "kube-api-access-nlct4") pod "eb9482b7-b9a0-4114-92d2-be2276447412" (UID: "eb9482b7-b9a0-4114-92d2-be2276447412"). InnerVolumeSpecName "kube-api-access-nlct4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.490372 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sjs8x" event={"ID":"2b77bea6-4e1c-42d4-a33c-da52abd756a6","Type":"ContainerStarted","Data":"77c29ffe59fe1d03ac0877a65d3075a6c761bd42cdb9f6b2c2e8787086faa429"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.495218 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4a27-account-create-update-ljhjm" event={"ID":"88ee292f-3c7f-4131-8f57-682fe8679f15","Type":"ContainerDied","Data":"8175b99f1cfb2dac5b77951aea7a1927b7c29bfbfac7485693eb14050fdfd9eb"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.495248 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8175b99f1cfb2dac5b77951aea7a1927b7c29bfbfac7485693eb14050fdfd9eb" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.495289 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4a27-account-create-update-ljhjm" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.498192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x996w" event={"ID":"b80bdd05-0def-4f41-a14a-5ad83cd6428f","Type":"ContainerDied","Data":"c11971621e5508a4a40eb4ac57506c8c0cd457350ed7b90d0b363d9a54411289"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.498216 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11971621e5508a4a40eb4ac57506c8c0cd457350ed7b90d0b363d9a54411289" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.498281 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x996w" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.499806 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-989d-account-create-update-5x2fg" event={"ID":"be8ad371-835d-4087-b6c5-00576bc60ab8","Type":"ContainerDied","Data":"a85e0f81a2e9bf479c2bbfc12c1942cff28d90cc8a733dfa5c561ceb498329ff"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.499826 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85e0f81a2e9bf479c2bbfc12c1942cff28d90cc8a733dfa5c561ceb498329ff" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.499826 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-989d-account-create-update-5x2fg" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.501196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r7rzq" event={"ID":"4034f818-c02e-451d-92ae-ebf4deb873ab","Type":"ContainerDied","Data":"56b1eb2a3cc30753977706ff1c08bef7fa7a26504cb7cedf7ac72ca7ca23defa"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.501224 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b1eb2a3cc30753977706ff1c08bef7fa7a26504cb7cedf7ac72ca7ca23defa" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.501232 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r7rzq" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.502723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6hn4j" event={"ID":"eb9482b7-b9a0-4114-92d2-be2276447412","Type":"ContainerDied","Data":"5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.502748 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dec81c4bb60431e441ec90f00b49cb8088fdfe4a4dec9343c08f8d99da40389" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.502810 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6hn4j" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.509835 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sjs8x" podStartSLOduration=2.280416276 podStartE2EDuration="7.50981724s" podCreationTimestamp="2026-02-16 21:57:50 +0000 UTC" firstStartedPulling="2026-02-16 21:57:51.74861962 +0000 UTC m=+1204.401898511" lastFinishedPulling="2026-02-16 21:57:56.978020584 +0000 UTC m=+1209.631299475" observedRunningTime="2026-02-16 21:57:57.507390755 +0000 UTC m=+1210.160669666" watchObservedRunningTime="2026-02-16 21:57:57.50981724 +0000 UTC m=+1210.163096131" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.510045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f4de-account-create-update-n2r2d" event={"ID":"7463b1e3-c90a-4525-a6d4-6d7892578aae","Type":"ContainerDied","Data":"02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.510080 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a72f17fd9014568855e68cbc1d7e398ba466c20854bc0bdb31399b7ec36cab" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.510128 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f4de-account-create-update-n2r2d" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.515380 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6a4-account-create-update-tnht4" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.515404 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6a4-account-create-update-tnht4" event={"ID":"bcf27831-30f8-406a-a277-c6e61987fe35","Type":"ContainerDied","Data":"a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.515436 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0647d9279328d583c838546346bdb76557dde4a60e2ecf6d586b7ef58f55c76" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.518679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" event={"ID":"2df2814e-70ee-40f3-9efe-4d7cfe16bd38","Type":"ContainerDied","Data":"0754b3b9a09df825a1968a86d8b1158f158b8068dadb70cd0544749240c858d0"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.518740 4792 scope.go:117] "RemoveContainer" containerID="29d0cbe4aa297ca43eb6c9e7c7a2320129194b4520513b5b44bef2167689fabe" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.518905 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-qfzrg" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.521421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p6tpn" event={"ID":"646151e2-5537-4de8-a366-f2e2aa64a307","Type":"ContainerDied","Data":"22553ed26e8a18c67a03bbcdfb2b7c978010c8135899f34647e6b6190b36f153"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.521510 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22553ed26e8a18c67a03bbcdfb2b7c978010c8135899f34647e6b6190b36f153" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.521564 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p6tpn" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.525945 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerStarted","Data":"d56215d1a2b1e40fa0952a5075f803d094b76e76e627846e2af0437204a9e9f1"} Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530085 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc\") pod \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb\") pod \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530258 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb\") pod \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530301 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config\") pod \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530366 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-579g4\" (UniqueName: \"kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4\") pod \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\" (UID: \"2df2814e-70ee-40f3-9efe-4d7cfe16bd38\") " Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530768 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7463b1e3-c90a-4525-a6d4-6d7892578aae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530779 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s24x2\" (UniqueName: \"kubernetes.io/projected/bcf27831-30f8-406a-a277-c6e61987fe35-kube-api-access-s24x2\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530788 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/646151e2-5537-4de8-a366-f2e2aa64a307-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530797 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrngb\" (UniqueName: \"kubernetes.io/projected/646151e2-5537-4de8-a366-f2e2aa64a307-kube-api-access-nrngb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530808 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4034f818-c02e-451d-92ae-ebf4deb873ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530818 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbfm\" (UniqueName: \"kubernetes.io/projected/7463b1e3-c90a-4525-a6d4-6d7892578aae-kube-api-access-dwbfm\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530828 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvnwh\" (UniqueName: \"kubernetes.io/projected/4034f818-c02e-451d-92ae-ebf4deb873ab-kube-api-access-jvnwh\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530836 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9482b7-b9a0-4114-92d2-be2276447412-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530845 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d82g7\" (UniqueName: \"kubernetes.io/projected/88ee292f-3c7f-4131-8f57-682fe8679f15-kube-api-access-d82g7\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530856 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be8ad371-835d-4087-b6c5-00576bc60ab8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530864 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqns8\" (UniqueName: \"kubernetes.io/projected/be8ad371-835d-4087-b6c5-00576bc60ab8-kube-api-access-vqns8\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530872 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ee292f-3c7f-4131-8f57-682fe8679f15-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.530881 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlct4\" (UniqueName: \"kubernetes.io/projected/eb9482b7-b9a0-4114-92d2-be2276447412-kube-api-access-nlct4\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.546779 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4" (OuterVolumeSpecName: "kube-api-access-579g4") pod "2df2814e-70ee-40f3-9efe-4d7cfe16bd38" (UID: "2df2814e-70ee-40f3-9efe-4d7cfe16bd38"). InnerVolumeSpecName "kube-api-access-579g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.559765 4792 scope.go:117] "RemoveContainer" containerID="6f50e69d981c64890ffe2307a59b5a9917bec7db8f9b894772a3feff3f57cfc1" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.586523 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2df2814e-70ee-40f3-9efe-4d7cfe16bd38" (UID: "2df2814e-70ee-40f3-9efe-4d7cfe16bd38"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.597317 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2df2814e-70ee-40f3-9efe-4d7cfe16bd38" (UID: "2df2814e-70ee-40f3-9efe-4d7cfe16bd38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.597992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config" (OuterVolumeSpecName: "config") pod "2df2814e-70ee-40f3-9efe-4d7cfe16bd38" (UID: "2df2814e-70ee-40f3-9efe-4d7cfe16bd38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.598488 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2df2814e-70ee-40f3-9efe-4d7cfe16bd38" (UID: "2df2814e-70ee-40f3-9efe-4d7cfe16bd38"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.633216 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.633621 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.633737 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-579g4\" (UniqueName: \"kubernetes.io/projected/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-kube-api-access-579g4\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.633885 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.634059 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2df2814e-70ee-40f3-9efe-4d7cfe16bd38-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.858732 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"] Feb 16 21:57:57 crc kubenswrapper[4792]: I0216 21:57:57.869655 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-qfzrg"] Feb 16 21:57:58 crc kubenswrapper[4792]: I0216 21:57:58.042508 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" path="/var/lib/kubelet/pods/2df2814e-70ee-40f3-9efe-4d7cfe16bd38/volumes" Feb 16 21:58:00 crc kubenswrapper[4792]: I0216 21:58:00.574143 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b77bea6-4e1c-42d4-a33c-da52abd756a6" containerID="77c29ffe59fe1d03ac0877a65d3075a6c761bd42cdb9f6b2c2e8787086faa429" exitCode=0 Feb 16 21:58:00 crc kubenswrapper[4792]: I0216 21:58:00.574198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sjs8x" event={"ID":"2b77bea6-4e1c-42d4-a33c-da52abd756a6","Type":"ContainerDied","Data":"77c29ffe59fe1d03ac0877a65d3075a6c761bd42cdb9f6b2c2e8787086faa429"} Feb 16 21:58:01 crc kubenswrapper[4792]: I0216 21:58:01.610251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerStarted","Data":"765645328cddc576b033d8a9d72f6e565c9a86d85a5bbdef70f47fc5a26ece7e"} Feb 16 21:58:01 crc kubenswrapper[4792]: I0216 21:58:01.612790 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ee2931a-9b3b-4568-b83b-9846e6f9c65a","Type":"ContainerStarted","Data":"93a9e084d0951f5f50803eef7badaaf431c3bf0b3731d2e27a00ad0bd9627cea"} Feb 16 21:58:01 crc kubenswrapper[4792]: I0216 21:58:01.652412 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.652392758 podStartE2EDuration="16.652392758s" podCreationTimestamp="2026-02-16 21:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 21:58:01.64023785 +0000 UTC m=+1214.293516741" watchObservedRunningTime="2026-02-16 21:58:01.652392758 +0000 UTC m=+1214.305671649" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.054076 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.135523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86hgl\" (UniqueName: \"kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl\") pod \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.135922 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data\") pod \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.136018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle\") pod \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\" (UID: \"2b77bea6-4e1c-42d4-a33c-da52abd756a6\") " Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.143741 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl" (OuterVolumeSpecName: "kube-api-access-86hgl") pod "2b77bea6-4e1c-42d4-a33c-da52abd756a6" (UID: "2b77bea6-4e1c-42d4-a33c-da52abd756a6"). InnerVolumeSpecName "kube-api-access-86hgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.179708 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b77bea6-4e1c-42d4-a33c-da52abd756a6" (UID: "2b77bea6-4e1c-42d4-a33c-da52abd756a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.200732 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data" (OuterVolumeSpecName: "config-data") pod "2b77bea6-4e1c-42d4-a33c-da52abd756a6" (UID: "2b77bea6-4e1c-42d4-a33c-da52abd756a6"). InnerVolumeSpecName "config-data". 
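The pod_startup_latency_tracker entry above for prometheus-metric-storage-0 reports podStartSLOduration=16.652392758. Since both pull timestamps are the zero time (0001-01-01), no image-pull window appears to be excluded here, and the figure matches observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic:

```go
// Verifies the startup-latency arithmetic from the log entry above:
// 2026-02-16 21:58:01.652392758 minus 2026-02-16 21:57:45 is 16.652392758s.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-02-16 21:57:45 +0000 UTC")
	running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-16 21:58:01.652392758 +0000 UTC")
	fmt.Println(running.Sub(created)) // prints 16.652392758s
}
```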
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.239500 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86hgl\" (UniqueName: \"kubernetes.io/projected/2b77bea6-4e1c-42d4-a33c-da52abd756a6-kube-api-access-86hgl\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.239537 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.239548 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b77bea6-4e1c-42d4-a33c-da52abd756a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.636329 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sjs8x" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.638817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sjs8x" event={"ID":"2b77bea6-4e1c-42d4-a33c-da52abd756a6","Type":"ContainerDied","Data":"d22ece39bc3316fb1405eff61b689d58eeb06a8e60da5e05981ba721e20d7afb"} Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.638869 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22ece39bc3316fb1405eff61b689d58eeb06a8e60da5e05981ba721e20d7afb" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.922799 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923228 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80bdd05-0def-4f41-a14a-5ad83cd6428f" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923245 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80bdd05-0def-4f41-a14a-5ad83cd6428f" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923262 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b77bea6-4e1c-42d4-a33c-da52abd756a6" containerName="keystone-db-sync" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923270 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b77bea6-4e1c-42d4-a33c-da52abd756a6" containerName="keystone-db-sync" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923564 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646151e2-5537-4de8-a366-f2e2aa64a307" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923578 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="646151e2-5537-4de8-a366-f2e2aa64a307" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923588 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8ad371-835d-4087-b6c5-00576bc60ab8" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923664 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8ad371-835d-4087-b6c5-00576bc60ab8" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923675 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="init" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923681 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="init" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923696 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4034f818-c02e-451d-92ae-ebf4deb873ab" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923702 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034f818-c02e-451d-92ae-ebf4deb873ab" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923724 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="dnsmasq-dns" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923729 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" containerName="dnsmasq-dns" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923737 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7463b1e3-c90a-4525-a6d4-6d7892578aae" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923744 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7463b1e3-c90a-4525-a6d4-6d7892578aae" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923751 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ee292f-3c7f-4131-8f57-682fe8679f15" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923757 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ee292f-3c7f-4131-8f57-682fe8679f15" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923767 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf27831-30f8-406a-a277-c6e61987fe35" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923773 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf27831-30f8-406a-a277-c6e61987fe35" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: E0216 21:58:02.923779 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9482b7-b9a0-4114-92d2-be2276447412" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923785 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9482b7-b9a0-4114-92d2-be2276447412" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923982 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4034f818-c02e-451d-92ae-ebf4deb873ab" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.923995 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="646151e2-5537-4de8-a366-f2e2aa64a307" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924006 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9482b7-b9a0-4114-92d2-be2276447412" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924023 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df2814e-70ee-40f3-9efe-4d7cfe16bd38" 
containerName="dnsmasq-dns" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924032 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8ad371-835d-4087-b6c5-00576bc60ab8" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924043 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ee292f-3c7f-4131-8f57-682fe8679f15" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924054 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b77bea6-4e1c-42d4-a33c-da52abd756a6" containerName="keystone-db-sync" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924072 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80bdd05-0def-4f41-a14a-5ad83cd6428f" containerName="mariadb-database-create" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924085 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7463b1e3-c90a-4525-a6d4-6d7892578aae" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.924096 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf27831-30f8-406a-a277-c6e61987fe35" containerName="mariadb-account-create-update" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.929180 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.955283 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.970966 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cth4j"] Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.972449 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.977544 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gjvkz" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.977765 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.977881 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.978372 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.978474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:58:02 crc kubenswrapper[4792]: I0216 21:58:02.990340 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cth4j"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064571 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064613 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-826mt\" (UniqueName: \"kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064647 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlffm\" (UniqueName: \"kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064685 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064809 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064839 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064868 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064899 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.064940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.065018 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.078370 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-njp9q"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.079773 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.088979 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.089164 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6kpj6" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.115901 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-njp9q"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.162850 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mg87r"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.164221 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166839 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166869 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.166972 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7ck\" (UniqueName: \"kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167025 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-826mt\" (UniqueName: \"kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167048 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlffm\" (UniqueName: \"kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 
crc kubenswrapper[4792]: I0216 21:58:03.167108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167361 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.167423 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.170749 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.170936 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kfdl7" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.171034 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.171233 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.176344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc 
kubenswrapper[4792]: I0216 21:58:03.177565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.181668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.185437 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.186527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.190946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.192538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.198144 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.220830 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jvjtg"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.223034 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.225957 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlffm\" (UniqueName: \"kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.227203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hn26t" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.227463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys\") pod \"keystone-bootstrap-cth4j\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.227521 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.227799 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.251407 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-826mt\" (UniqueName: \"kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt\") pod \"dnsmasq-dns-6c9c9f998c-vm6z5\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.253502 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mg87r"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.300002 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.307054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.307331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-combined-ca-bundle\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.307381 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.307446 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m7ck\" (UniqueName: \"kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.344487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqczn\" (UniqueName: \"kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.344576 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.351742 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jvjtg"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.382624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.386633 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m7ck\" (UniqueName: \"kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.402327 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4qx2s"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.436423 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.442453 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrtnx" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.442715 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.451895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.459772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.459981 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.460205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-combined-ca-bundle\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.460344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8428q\" (UniqueName: \"kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.460436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.460525 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.461468 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 
21:58:03.461604 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqczn\" (UniqueName: \"kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.463168 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-combined-ca-bundle\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.463229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.464444 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data\") pod \"heat-db-sync-njp9q\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.525261 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqczn\" (UniqueName: \"kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn\") pod \"neutron-db-sync-mg87r\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.549681 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.598043 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4qx2s"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599425 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599463 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599722 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97lwb\" (UniqueName: \"kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.599785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8428q\" (UniqueName: 
\"kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.620986 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.625060 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.643367 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.644150 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.646341 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.667396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8428q\" (UniqueName: \"kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q\") pod \"cinder-db-sync-jvjtg\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.669359 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7vsw9"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.670629 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.675581 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.675649 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mtqhl" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.675808 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.679526 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7vsw9"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.705021 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.705361 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.705414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97lwb\" (UniqueName: \"kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.717290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.717498 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.717541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.724525 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.740568 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97lwb\" (UniqueName: \"kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb\") pod \"barbican-db-sync-4qx2s\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.758879 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.761660 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.774838 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.775062 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.788365 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.813001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.813064 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.813088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.813270 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hp8\" (UniqueName: \"kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.813373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.838134 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.838310 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.839801 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.841525 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.848440 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.889039 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.925757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.925830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hp8\" (UniqueName: \"kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.925861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.925997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926161 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926189 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz898\" (UniqueName: \"kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.926301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.928127 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.930034 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.930308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.930904 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:03 crc kubenswrapper[4792]: I0216 21:58:03.953400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hp8\" (UniqueName: \"kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8\") pod \"placement-db-sync-7vsw9\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 
21:58:04.025015 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.029902 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.029935 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz898\" (UniqueName: \"kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.029962 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.029989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030005 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030043 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24fvb\" (UniqueName: \"kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030130 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030153 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030194 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.030259 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.033910 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.036014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.037095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.042560 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.043114 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.044899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " 
pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.063304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz898\" (UniqueName: \"kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898\") pod \"ceilometer-0\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.115101 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.115741 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.120931 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.124365 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.124733 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.124876 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.129112 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bcfqq" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.144289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.144465 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.144873 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.145467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.146413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc 
kubenswrapper[4792]: I0216 21:58:04.148233 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24fvb\" (UniqueName: \"kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.150452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.151857 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.155132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.157134 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.159065 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.159390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.189574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24fvb\" (UniqueName: \"kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb\") pod \"dnsmasq-dns-57c957c4ff-b84fx\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.196249 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.231821 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.234008 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.244345 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.244557 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250393 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250468 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250489 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.250563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2wz\" (UniqueName: 
\"kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.254669 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.359957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360099 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bsqh\" (UniqueName: \"kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360297 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360398 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360429 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360491 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.360588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2wz\" (UniqueName: \"kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.363208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs\") pod \"glance-default-external-api-0\" (UID: 
\"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.364572 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.364739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.368887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.369071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.369273 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.369301 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fda07fe1d1b61a7ca2f0646c25157ff7862921af25dfa15dc58bc6fca46e142c/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.369983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.408966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2wz\" (UniqueName: \"kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.416152 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cth4j"] Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.462731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.462797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bsqh\" (UniqueName: \"kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.462861 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.462904 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.462998 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 
21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.463033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.463121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.463168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.464279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.469246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.478844 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
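Most of the records in this window come from the kubelet volume manager's reconciliation loop: reconciler_common.go first verifies each volume is attached (VerifyControllerAttachedVolume), then starts MountVolume, and operation_generator.go reports MountVolume.SetUp succeeded once the per-pod mount completes; a sandbox for the pod is only started after every volume reaches the mounted state. A minimal sketch of that state progression, assuming simplified in-memory state rather than the real desired/actual state-of-world caches:

package main

import "fmt"

type volume struct {
	name     string
	attached bool // set once VerifyControllerAttachedVolume succeeds
	mounted  bool // set once MountVolume.SetUp succeeds
}

type pod struct {
	name    string
	volumes []*volume
}

// reconcile advances each volume one state per pass, mirroring the
// VerifyControllerAttachedVolume -> MountVolume -> SetUp progression
// visible in the log; it returns true when every volume is mounted.
func reconcile(p *pod) bool {
	ready := true
	for _, v := range p.volumes {
		switch {
		case !v.attached:
			fmt.Printf("verify attach: volume %q pod %q\n", v.name, p.name)
			v.attached = true
			ready = false
		case !v.mounted:
			fmt.Printf("mount: volume %q pod %q ... SetUp succeeded\n", v.name, p.name)
			v.mounted = true
		}
	}
	return ready
}

func main() {
	p := &pod{name: "openstack/ceilometer-0", volumes: []*volume{
		{name: "scripts"}, {name: "config-data"}, {name: "combined-ca-bundle"},
	}}
	for !reconcile(p) {
		// keep reconciling until all volumes are attached and mounted
	}
	fmt.Println("all volumes mounted; sandbox can be created")
}
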
Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.478888 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ec818cdac5fc3207a3e7d919212a3c077b51c825579526e875ab6fe8a7327b5/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.484589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.490379 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.491437 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.493590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.507518 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bsqh\" (UniqueName: \"kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.512869 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.550437 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.613881 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.696070 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cth4j" event={"ID":"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10","Type":"ContainerStarted","Data":"a742a9fe23b0a054fe5063ca4379638fdb10f944051d83b749a64220e573fa47"} Feb 16 21:58:04 crc kubenswrapper[4792]: I0216 21:58:04.770864 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:05 crc kubenswrapper[4792]: W0216 21:58:05.052203 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d59609_2910_4114_98d4_0f5154b95b1b.slice/crio-c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8 WatchSource:0}: Error finding container c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8: Status 404 returned error can't find the container with id c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8 Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.058929 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-njp9q"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.080499 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.105005 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mg87r"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.119485 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4qx2s"] Feb 16 21:58:05 crc kubenswrapper[4792]: W0216 21:58:05.379107 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196b6e8b_8689_469d_a348_455b4b9b701a.slice/crio-4562af2acc0b9db4f47f913ff3b9f67c338c16dea1148b62851469108b7f9b7b WatchSource:0}: Error finding container 4562af2acc0b9db4f47f913ff3b9f67c338c16dea1148b62851469108b7f9b7b: Status 404 returned error can't find the container with id 4562af2acc0b9db4f47f913ff3b9f67c338c16dea1148b62851469108b7f9b7b Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.381072 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:05 crc kubenswrapper[4792]: W0216 21:58:05.390380 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6432216a_a549_4060_8369_b6a0d86f1ba2.slice/crio-3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba WatchSource:0}: Error finding container 3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba: Status 404 returned error can't find the container with id 3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.405380 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7vsw9"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.418260 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jvjtg"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.638002 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.712210 4792 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.740966 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" event={"ID":"196b6e8b-8689-469d-a348-455b4b9b701a","Type":"ContainerStarted","Data":"4562af2acc0b9db4f47f913ff3b9f67c338c16dea1148b62851469108b7f9b7b"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.754201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jvjtg" event={"ID":"6432216a-a549-4060-8369-b6a0d86f1ba2","Type":"ContainerStarted","Data":"3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.769976 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.778678 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cth4j" event={"ID":"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10","Type":"ContainerStarted","Data":"d2b8bc3e0f5096593470ee6cb457091a4effafd8290fe14545303aa7648d35a7"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.793351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vsw9" event={"ID":"64774f1f-f141-4fad-a226-1ac6b3a93782","Type":"ContainerStarted","Data":"7b88936e59270b0dc9b3519f077091c2d8226d978e8262c2f4b6bbd45fc8bda4"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.829439 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-njp9q" event={"ID":"72d59609-2910-4114-98d4-0f5154b95b1b","Type":"ContainerStarted","Data":"c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.862721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerStarted","Data":"1831770f20826491de119ed39ddc11e2c9bd4cf81c41041097d055e4d764976f"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.905028 4792 generic.go:334] "Generic (PLEG): container finished" podID="cc7fc103-c868-4264-9a79-0da66b3dea32" containerID="62d17213724d27ba51bc28f7d4987dff2a46b6659f7293d04e83d9552f0d7268" exitCode=0 Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.905877 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" event={"ID":"cc7fc103-c868-4264-9a79-0da66b3dea32","Type":"ContainerDied","Data":"62d17213724d27ba51bc28f7d4987dff2a46b6659f7293d04e83d9552f0d7268"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.905907 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" event={"ID":"cc7fc103-c868-4264-9a79-0da66b3dea32","Type":"ContainerStarted","Data":"903cdcbfd3382ceb9038a786f62b988075d2d068dedd7b456779b7ced79775bd"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.924740 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.961380 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.962063 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cth4j" podStartSLOduration=3.9620456219999998 podStartE2EDuration="3.962045622s" 
podCreationTimestamp="2026-02-16 21:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:05.843439049 +0000 UTC m=+1218.496717940" watchObservedRunningTime="2026-02-16 21:58:05.962045622 +0000 UTC m=+1218.615324513" Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.967189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4qx2s" event={"ID":"92b62519-345c-4ed1-b2cc-63186693467d","Type":"ContainerStarted","Data":"ecc063d7c543f15ead781d36eefca69d72c6edccca0b15fb054a3e3adc40981c"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.971880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c5f22cf-dad1-40a7-a58f-038dba0c59f7","Type":"ContainerStarted","Data":"c805ad7b76556cbffd2b05c4d895fc60a79f77185286476c8682788a0baeeee6"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.974100 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mg87r" event={"ID":"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c","Type":"ContainerStarted","Data":"10c66b0ccfd225fa0795e614048cd558fe795172ef58fc81d5ab670419caea4c"} Feb 16 21:58:05 crc kubenswrapper[4792]: I0216 21:58:05.974144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mg87r" event={"ID":"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c","Type":"ContainerStarted","Data":"6ff77118d99dea50762dc3d028dab1e728dacfe290323d3b2ca896c427599797"} Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.002772 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.032454 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mg87r" podStartSLOduration=3.032436857 podStartE2EDuration="3.032436857s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:05.990427196 +0000 UTC m=+1218.643706087" watchObservedRunningTime="2026-02-16 21:58:06.032436857 +0000 UTC m=+1218.685715748" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.338340 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.504347 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-826mt\" (UniqueName: \"kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590129 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590178 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590247 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.590295 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb\") pod \"cc7fc103-c868-4264-9a79-0da66b3dea32\" (UID: \"cc7fc103-c868-4264-9a79-0da66b3dea32\") " Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.625335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt" (OuterVolumeSpecName: "kube-api-access-826mt") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "kube-api-access-826mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.634944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.636062 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config" (OuterVolumeSpecName: "config") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.647928 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.652908 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.664406 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc7fc103-c868-4264-9a79-0da66b3dea32" (UID: "cc7fc103-c868-4264-9a79-0da66b3dea32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692728 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692756 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692768 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-826mt\" (UniqueName: \"kubernetes.io/projected/cc7fc103-c868-4264-9a79-0da66b3dea32-kube-api-access-826mt\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692777 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692786 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:06 crc kubenswrapper[4792]: I0216 21:58:06.692793 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7fc103-c868-4264-9a79-0da66b3dea32-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.019840 4792 generic.go:334] "Generic (PLEG): container finished" podID="196b6e8b-8689-469d-a348-455b4b9b701a" containerID="4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa" exitCode=0 Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.019940 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" 
event={"ID":"196b6e8b-8689-469d-a348-455b4b9b701a","Type":"ContainerDied","Data":"4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa"} Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.024190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerStarted","Data":"a9f3fb0af808b5ee2861e9da4694b8121d83dfee1a4578ea8569386137e930b5"} Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.031407 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" event={"ID":"cc7fc103-c868-4264-9a79-0da66b3dea32","Type":"ContainerDied","Data":"903cdcbfd3382ceb9038a786f62b988075d2d068dedd7b456779b7ced79775bd"} Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.031477 4792 scope.go:117] "RemoveContainer" containerID="62d17213724d27ba51bc28f7d4987dff2a46b6659f7293d04e83d9552f0d7268" Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.031752 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-vm6z5" Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.132032 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:07 crc kubenswrapper[4792]: I0216 21:58:07.146074 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-vm6z5"] Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.063305 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7fc103-c868-4264-9a79-0da66b3dea32" path="/var/lib/kubelet/pods/cc7fc103-c868-4264-9a79-0da66b3dea32/volumes" Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.085897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" event={"ID":"196b6e8b-8689-469d-a348-455b4b9b701a","Type":"ContainerStarted","Data":"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea"} Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.086786 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.094059 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerStarted","Data":"85164a7db182fbee8a89b7ec390dec19f13eab1e703d59a11ef6e2292b1d9fa4"} Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.102538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c5f22cf-dad1-40a7-a58f-038dba0c59f7","Type":"ContainerStarted","Data":"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833"} Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.102664 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-log" containerID="cri-o://2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" gracePeriod=30 Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.102687 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-httpd" containerID="cri-o://e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" 
gracePeriod=30 Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.239346 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" podStartSLOduration=5.239327115 podStartE2EDuration="5.239327115s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:08.208192878 +0000 UTC m=+1220.861471769" watchObservedRunningTime="2026-02-16 21:58:08.239327115 +0000 UTC m=+1220.892606006" Feb 16 21:58:08 crc kubenswrapper[4792]: I0216 21:58:08.296295 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.296275609 podStartE2EDuration="5.296275609s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:08.25436479 +0000 UTC m=+1220.907643681" watchObservedRunningTime="2026-02-16 21:58:08.296275609 +0000 UTC m=+1220.949554500" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.035283 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.152948 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerStarted","Data":"d0fd77f6000972258fb9e5d5b85d4b98b160a93b9a9f8892db58a53a5db6bf4f"} Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.153088 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-log" containerID="cri-o://85164a7db182fbee8a89b7ec390dec19f13eab1e703d59a11ef6e2292b1d9fa4" gracePeriod=30 Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.153148 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-httpd" containerID="cri-o://d0fd77f6000972258fb9e5d5b85d4b98b160a93b9a9f8892db58a53a5db6bf4f" gracePeriod=30 Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.155581 4792 generic.go:334] "Generic (PLEG): container finished" podID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerID="2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" exitCode=143 Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.155633 4792 generic.go:334] "Generic (PLEG): container finished" podID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerID="e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" exitCode=143 Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.156694 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.157011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c5f22cf-dad1-40a7-a58f-038dba0c59f7","Type":"ContainerDied","Data":"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833"} Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.157050 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c5f22cf-dad1-40a7-a58f-038dba0c59f7","Type":"ContainerDied","Data":"c805ad7b76556cbffd2b05c4d895fc60a79f77185286476c8682788a0baeeee6"} Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.157064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c5f22cf-dad1-40a7-a58f-038dba0c59f7","Type":"ContainerDied","Data":"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018"} Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.157081 4792 scope.go:117] "RemoveContainer" containerID="e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc2wz\" (UniqueName: \"kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182388 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182417 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182471 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182553 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182620 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run\") pod 
\"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182729 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\" (UID: \"8c5f22cf-dad1-40a7-a58f-038dba0c59f7\") " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.182880 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs" (OuterVolumeSpecName: "logs") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.183268 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.188291 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.192885 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts" (OuterVolumeSpecName: "scripts") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.194363 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.194345804 podStartE2EDuration="6.194345804s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:09.189408482 +0000 UTC m=+1221.842687383" watchObservedRunningTime="2026-02-16 21:58:09.194345804 +0000 UTC m=+1221.847624695" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.234480 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz" (OuterVolumeSpecName: "kube-api-access-rc2wz") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "kube-api-access-rc2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.244926 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9" (OuterVolumeSpecName: "glance") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.270360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.285533 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.285572 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.285617 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") on node \"crc\" " Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.285635 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc2wz\" (UniqueName: \"kubernetes.io/projected/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-kube-api-access-rc2wz\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.285648 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.288796 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.308943 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data" (OuterVolumeSpecName: "config-data") pod "8c5f22cf-dad1-40a7-a58f-038dba0c59f7" (UID: "8c5f22cf-dad1-40a7-a58f-038dba0c59f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.360470 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.360716 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9") on node "crc" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.387301 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.387352 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.387371 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5f22cf-dad1-40a7-a58f-038dba0c59f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.563635 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.577551 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.606464 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:09 crc kubenswrapper[4792]: E0216 21:58:09.607247 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-httpd" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607268 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-httpd" Feb 16 21:58:09 crc kubenswrapper[4792]: E0216 21:58:09.607293 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7fc103-c868-4264-9a79-0da66b3dea32" containerName="init" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607299 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7fc103-c868-4264-9a79-0da66b3dea32" containerName="init" Feb 16 21:58:09 crc kubenswrapper[4792]: E0216 21:58:09.607313 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-log" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607319 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-log" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607565 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7fc103-c868-4264-9a79-0da66b3dea32" containerName="init" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607617 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-httpd" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.607636 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" containerName="glance-log" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.609579 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.615215 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.616841 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.618348 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.693841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.693913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.693952 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.693990 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxqc\" (UniqueName: \"kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.694062 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.694079 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.694112 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.694162 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795524 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795641 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxqc\" (UniqueName: \"kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.795761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.796048 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.796307 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.798243 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.798269 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fda07fe1d1b61a7ca2f0646c25157ff7862921af25dfa15dc58bc6fca46e142c/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.801285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.801616 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.811166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.811830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.812632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxqc\" (UniqueName: \"kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.851772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " pod="openstack/glance-default-external-api-0" Feb 16 21:58:09 crc kubenswrapper[4792]: I0216 21:58:09.940641 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.040864 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5f22cf-dad1-40a7-a58f-038dba0c59f7" path="/var/lib/kubelet/pods/8c5f22cf-dad1-40a7-a58f-038dba0c59f7/volumes" Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.168432 4792 generic.go:334] "Generic (PLEG): container finished" podID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerID="d0fd77f6000972258fb9e5d5b85d4b98b160a93b9a9f8892db58a53a5db6bf4f" exitCode=0 Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.168499 4792 generic.go:334] "Generic (PLEG): container finished" podID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerID="85164a7db182fbee8a89b7ec390dec19f13eab1e703d59a11ef6e2292b1d9fa4" exitCode=143 Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.168542 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerDied","Data":"d0fd77f6000972258fb9e5d5b85d4b98b160a93b9a9f8892db58a53a5db6bf4f"} Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.168569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerDied","Data":"85164a7db182fbee8a89b7ec390dec19f13eab1e703d59a11ef6e2292b1d9fa4"} Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.169492 4792 generic.go:334] "Generic (PLEG): container finished" podID="7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" containerID="d2b8bc3e0f5096593470ee6cb457091a4effafd8290fe14545303aa7648d35a7" exitCode=0 Feb 16 21:58:10 crc kubenswrapper[4792]: I0216 21:58:10.170507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cth4j" event={"ID":"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10","Type":"ContainerDied","Data":"d2b8bc3e0f5096593470ee6cb457091a4effafd8290fe14545303aa7648d35a7"} Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.853888 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.956121 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlffm\" (UniqueName: \"kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.956304 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.956404 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.956437 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.956474 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.957012 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle\") pod \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\" (UID: \"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10\") " Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.961771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.962362 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.963367 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm" (OuterVolumeSpecName: "kube-api-access-mlffm") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "kube-api-access-mlffm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.968887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts" (OuterVolumeSpecName: "scripts") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.989147 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data" (OuterVolumeSpecName: "config-data") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:11 crc kubenswrapper[4792]: I0216 21:58:11.992262 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" (UID: "7dc654c4-7d0a-4dfe-886b-bb07dc12cc10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062848 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062888 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062900 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062910 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062921 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.062932 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlffm\" (UniqueName: \"kubernetes.io/projected/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10-kube-api-access-mlffm\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.203079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cth4j" event={"ID":"7dc654c4-7d0a-4dfe-886b-bb07dc12cc10","Type":"ContainerDied","Data":"a742a9fe23b0a054fe5063ca4379638fdb10f944051d83b749a64220e573fa47"} Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.203351 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a742a9fe23b0a054fe5063ca4379638fdb10f944051d83b749a64220e573fa47" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.203155 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cth4j" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.274236 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cth4j"] Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.285174 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cth4j"] Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.370546 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jsrtw"] Feb 16 21:58:12 crc kubenswrapper[4792]: E0216 21:58:12.371083 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" containerName="keystone-bootstrap" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.371104 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" containerName="keystone-bootstrap" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.371452 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" containerName="keystone-bootstrap" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.372523 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.374379 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.374847 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.375065 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.375306 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gjvkz" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.376319 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.382872 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsrtw"] Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.478776 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k64f\" (UniqueName: \"kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.479210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.479238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" 
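
The DELETE/REMOVE/ADD/UPDATE sequence above is the rollover from keystone-bootstrap-cth4j to its replacement keystone-bootstrap-jsrtw. A minimal sketch that reconstructs such a rollover timeline from the "SyncLoop" records — assuming one klog record per line, and noting that klog headers omit the year (strptime's 1900 default is harmless for computing deltas):

    import re, sys
    from datetime import datetime

    # Header + message layout as seen above, e.g.
    #   I0216 21:58:12.370546 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jsrtw"]
    REC = re.compile(r'[IWE](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}).*?'
                     r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\[([^\]]*)\]')

    prev = None
    for line in sys.stdin:
        m = REC.search(line)
        if not m or "keystone-bootstrap" not in m.group(4):
            continue
        ts = datetime.strptime(f"{m.group(1)} {m.group(2)}", "%m%d %H:%M:%S.%f")
        gap = f"  (+{(ts - prev).total_seconds():.3f}s)" if prev else ""
        print(f"{ts.time()} {m.group(3):6} {m.group(4)}{gap}")
        prev = ts
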
Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.479281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.479407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.479482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.489786 4792 scope.go:117] "RemoveContainer" containerID="2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.581847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.581889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.581927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.581963 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.581988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.582427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k64f\" (UniqueName: \"kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f\") pod \"keystone-bootstrap-jsrtw\" (UID: 
\"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.586000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.586289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.586289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.586804 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.591002 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.598361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k64f\" (UniqueName: \"kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f\") pod \"keystone-bootstrap-jsrtw\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:12 crc kubenswrapper[4792]: I0216 21:58:12.707241 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:14 crc kubenswrapper[4792]: I0216 21:58:14.049231 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc654c4-7d0a-4dfe-886b-bb07dc12cc10" path="/var/lib/kubelet/pods/7dc654c4-7d0a-4dfe-886b-bb07dc12cc10/volumes" Feb 16 21:58:14 crc kubenswrapper[4792]: I0216 21:58:14.197821 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:14 crc kubenswrapper[4792]: I0216 21:58:14.262038 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:58:14 crc kubenswrapper[4792]: I0216 21:58:14.263063 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" containerID="cri-o://9c258bae3f1eae089555296a79d1bf8dc912bccf1afd5fb59fa3f995fea49a65" gracePeriod=10 Feb 16 21:58:15 crc kubenswrapper[4792]: I0216 21:58:15.258748 4792 generic.go:334] "Generic (PLEG): container finished" podID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerID="9c258bae3f1eae089555296a79d1bf8dc912bccf1afd5fb59fa3f995fea49a65" exitCode=0 Feb 16 21:58:15 crc kubenswrapper[4792]: I0216 21:58:15.258834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" event={"ID":"4d82eba4-4763-4dc0-a3f3-5236c0119764","Type":"ContainerDied","Data":"9c258bae3f1eae089555296a79d1bf8dc912bccf1afd5fb59fa3f995fea49a65"} Feb 16 21:58:15 crc kubenswrapper[4792]: I0216 21:58:15.769075 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 21:58:15 crc kubenswrapper[4792]: I0216 21:58:15.775541 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 21:58:16 crc kubenswrapper[4792]: I0216 21:58:16.277783 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 21:58:19 crc kubenswrapper[4792]: I0216 21:58:19.014918 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Feb 16 21:58:21 crc kubenswrapper[4792]: I0216 21:58:21.328788 4792 generic.go:334] "Generic (PLEG): container finished" podID="23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" containerID="10c66b0ccfd225fa0795e614048cd558fe795172ef58fc81d5ab670419caea4c" exitCode=0 Feb 16 21:58:21 crc kubenswrapper[4792]: I0216 21:58:21.328879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mg87r" event={"ID":"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c","Type":"ContainerDied","Data":"10c66b0ccfd225fa0795e614048cd558fe795172ef58fc81d5ab670419caea4c"} Feb 16 21:58:24 crc kubenswrapper[4792]: I0216 21:58:24.014319 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.756033 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.869860 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.869925 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.869960 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.869976 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.870203 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.870229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bsqh\" (UniqueName: \"kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.870271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.870364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run\") pod \"e0b195c4-7dac-4393-be5a-045dc1af6481\" (UID: \"e0b195c4-7dac-4393-be5a-045dc1af6481\") " Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.870948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs" (OuterVolumeSpecName: "logs") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.871200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.871353 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.871369 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0b195c4-7dac-4393-be5a-045dc1af6481-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.914766 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts" (OuterVolumeSpecName: "scripts") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:27 crc kubenswrapper[4792]: I0216 21:58:27.929351 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh" (OuterVolumeSpecName: "kube-api-access-9bsqh") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "kube-api-access-9bsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.021899 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.022215 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bsqh\" (UniqueName: \"kubernetes.io/projected/e0b195c4-7dac-4393-be5a-045dc1af6481-kube-api-access-9bsqh\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.104928 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.105247 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.128633 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.128660 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.156580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2" (OuterVolumeSpecName: "glance") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.157167 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data" (OuterVolumeSpecName: "config-data") pod "e0b195c4-7dac-4393-be5a-045dc1af6481" (UID: "e0b195c4-7dac-4393-be5a-045dc1af6481"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.232075 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") on node \"crc\" " Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.232109 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b195c4-7dac-4393-be5a-045dc1af6481-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.263438 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.263581 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2") on node "crc" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.334356 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.381051 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.381261 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6m7ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-njp9q_openstack(72d59609-2910-4114-98d4-0f5154b95b1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.382440 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-njp9q" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" Feb 16 
21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.558421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0b195c4-7dac-4393-be5a-045dc1af6481","Type":"ContainerDied","Data":"a9f3fb0af808b5ee2861e9da4694b8121d83dfee1a4578ea8569386137e930b5"} Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.558506 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.563084 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-njp9q" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.687463 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.703072 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.714219 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.714756 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-log" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.714778 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-log" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.714818 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-httpd" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.714826 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-httpd" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.715032 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-httpd" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.715058 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" containerName="glance-log"
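[annotation] The heat-db-sync entries above show the two stages of a failed pull: the first sync attempt fails with ErrImagePull (the CRI pull was canceled mid-copy), and the very next sync fails fast with ImagePullBackOff while the per-image backoff window is open. A sketch of the backoff shape; the 10s base and 5m cap are commonly cited kubelet defaults and are assumptions here, not values taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the wait after each consecutive pull failure, up to a
    // hard cap; while the window is open, sync attempts surface as
    // ImagePullBackOff instead of re-pulling immediately.
    func nextDelay(failures int, base, limit time.Duration) time.Duration {
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 7; f++ {
            fmt.Printf("pull failure #%d -> retry allowed after %s\n", f, nextDelay(f, 10*time.Second, 5*time.Minute))
        }
    }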
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.722617 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.723169 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.740572 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.745268 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.745351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.745891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.745957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nvn\" (UniqueName: \"kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.746082 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.746162 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.746249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.746306 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847475 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nvn\" (UniqueName: \"kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847861 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.847893 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.849743 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.849955 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.851243 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.851269 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ec818cdac5fc3207a3e7d919212a3c077b51c825579526e875ab6fe8a7327b5/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.854737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.855215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.857800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.870016 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.874136 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nvn\" (UniqueName: \"kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.905691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.915838 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.916026 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97hcdhd7hdfh578h695h595h569h9h597h56ch696h8ch87h674h695h65fh4hb8h568hf4h8bh84h679h664h64hf7hb6h5d9h587h686hb5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz898,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fbad2630-a4ca-43fc-8c09-2c127888d3f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.928230 4792 scope.go:117] "RemoveContainer" containerID="e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.928846 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018\": container with ID starting with e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018 not found: ID does not exist" containerID="e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.928921 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018"} err="failed to get container status \"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018\": rpc error: code = NotFound desc = could not find container \"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018\": container with ID starting with e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018 not found: ID does not exist" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.928957 4792 scope.go:117] "RemoveContainer" containerID="2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" Feb 16 21:58:28 crc kubenswrapper[4792]: E0216 21:58:28.929663 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833\": container with ID starting with 2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833 not found: ID does not exist" containerID="2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.929694 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833"} err="failed to get container status \"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833\": rpc error: code = NotFound desc = could not find container \"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833\": container with ID starting with 2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833 not found: ID does not exist" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.929714 4792 scope.go:117] "RemoveContainer" containerID="e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.929982 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018"} err="failed to get container status \"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018\": rpc error: code = NotFound desc = could not find container \"e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018\": container with ID starting with e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018 not found: ID does not exist" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.930009 4792 scope.go:117] "RemoveContainer" containerID="2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.930451 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833"} err="failed to get container status \"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833\": rpc error: code = NotFound desc = could not find container \"2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833\": 
container with ID starting with 2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833 not found: ID does not exist" Feb 16 21:58:28 crc kubenswrapper[4792]: I0216 21:58:28.930504 4792 scope.go:117] "RemoveContainer" containerID="d0fd77f6000972258fb9e5d5b85d4b98b160a93b9a9f8892db58a53a5db6bf4f" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.042938 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.047507 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.152791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqczn\" (UniqueName: \"kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn\") pod \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.152919 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-combined-ca-bundle\") pod \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.153028 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config\") pod \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\" (UID: \"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c\") " Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.157810 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn" (OuterVolumeSpecName: "kube-api-access-xqczn") pod "23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" (UID: "23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c"). InnerVolumeSpecName "kube-api-access-xqczn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.180890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config" (OuterVolumeSpecName: "config") pod "23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" (UID: "23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
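[annotation] The ContainerStatus/DeleteContainer NotFound errors above (container IDs e024940e... and 2c514e9e...) are benign: container removal is treated as idempotent, so a NotFound status means the container is already gone and the kubelet logs the error and moves on. A minimal sketch of that tolerant delete, with a mock runtime type standing in for the CRI service:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("rpc error: code = NotFound desc = ID does not exist")

    // runtime is a local stand-in for the CRI runtime service.
    type runtime struct {
        containers map[string]bool
    }

    func (r *runtime) containerStatus(id string) error {
        if !r.containers[id] {
            return errNotFound
        }
        return nil
    }

    // removeContainer treats NotFound as "already removed": the desired end
    // state (container gone) is met, so the error is logged and dropped.
    func (r *runtime) removeContainer(id string) error {
        if err := r.containerStatus(id); errors.Is(err, errNotFound) {
            fmt.Printf("DeleteContainer returned error for %s... treating as already deleted\n", id[:12])
            return nil
        }
        delete(r.containers, id)
        return nil
    }

    func main() {
        r := &runtime{containers: map[string]bool{}}
        _ = r.removeContainer("e024940e3d5994301f2acc40158939bed788a3dbd6c2eac39838516ece6a7018")
        _ = r.removeContainer("2c514e9e28d558784114a93e37646c05f67d16a01fdf798e5b28538f3de80833")
    }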
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.269476 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.269872 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqczn\" (UniqueName: \"kubernetes.io/projected/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-kube-api-access-xqczn\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.270061 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.583567 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mg87r" event={"ID":"23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c","Type":"ContainerDied","Data":"6ff77118d99dea50762dc3d028dab1e728dacfe290323d3b2ca896c427599797"} Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.583952 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff77118d99dea50762dc3d028dab1e728dacfe290323d3b2ca896c427599797" Feb 16 21:58:29 crc kubenswrapper[4792]: I0216 21:58:29.583821 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mg87r" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.044246 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b195c4-7dac-4393-be5a-045dc1af6481" path="/var/lib/kubelet/pods/e0b195c4-7dac-4393-be5a-045dc1af6481/volumes" Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.305396 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.305856 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8428q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jvjtg_openstack(6432216a-a549-4060-8369-b6a0d86f1ba2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.305563 4792 scope.go:117] "RemoveContainer" containerID="85164a7db182fbee8a89b7ec390dec19f13eab1e703d59a11ef6e2292b1d9fa4" Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.307899 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jvjtg" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.344705 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.345331 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" containerName="neutron-db-sync" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.345350 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" containerName="neutron-db-sync" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.345635 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" containerName="neutron-db-sync" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.347122 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.386806 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.452015 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.484578 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.485033 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.485044 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.485058 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="init" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.485063 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="init" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.485257 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.486268 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501180 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501358 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kfdl7" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501506 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njsd\" (UniqueName: \"kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.501917 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.502020 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.507724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.526666 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.609109 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.609314 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.609505 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610366 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sj2s\" (UniqueName: \"kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s\") pod \"4d82eba4-4763-4dc0-a3f3-5236c0119764\" (UID: \"4d82eba4-4763-4dc0-a3f3-5236c0119764\") " Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610588 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610821 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610852 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.610868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611179 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611226 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611297 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8njsd\" (UniqueName: \"kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611326 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qrmq\" (UniqueName: \"kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611378 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.611405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.617185 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.617548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.617783 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.619976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.622452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.636667 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" event={"ID":"4d82eba4-4763-4dc0-a3f3-5236c0119764","Type":"ContainerDied","Data":"171865ec70bc21797319d26d6b32af2c8d863379945315657212153bb01025c0"} Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.636761 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.639727 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s" (OuterVolumeSpecName: "kube-api-access-8sj2s") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "kube-api-access-8sj2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.643642 4792 scope.go:117] "RemoveContainer" containerID="9c258bae3f1eae089555296a79d1bf8dc912bccf1afd5fb59fa3f995fea49a65" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.653872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njsd\" (UniqueName: \"kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd\") pod \"dnsmasq-dns-5ccc5c4795-qqbl5\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: E0216 21:58:30.654800 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jvjtg" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714037 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qrmq\" (UniqueName: \"kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714462 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714848 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.714935 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sj2s\" (UniqueName: \"kubernetes.io/projected/4d82eba4-4763-4dc0-a3f3-5236c0119764-kube-api-access-8sj2s\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.719222 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " 
pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.724265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.731414 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.739162 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.742229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qrmq\" (UniqueName: \"kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq\") pod \"neutron-5fc7bbfd9b-jkwk2\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.749589 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config" (OuterVolumeSpecName: "config") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.768311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.778703 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.805936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.819374 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.819854 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.821037 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.821064 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.821077 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.825326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.853736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d82eba4-4763-4dc0-a3f3-5236c0119764" (UID: "4d82eba4-4763-4dc0-a3f3-5236c0119764"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.896482 4792 scope.go:117] "RemoveContainer" containerID="f50ba233704645564d73dbb6705ac3cf134773655156b8bdb936a1e1316e2cf7" Feb 16 21:58:30 crc kubenswrapper[4792]: I0216 21:58:30.932765 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d82eba4-4763-4dc0-a3f3-5236c0119764-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.033937 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.052848 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-bjcg8"] Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.098876 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.150460 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsrtw"] Feb 16 21:58:31 crc kubenswrapper[4792]: W0216 21:58:31.166414 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f7c29a5_bb18_4493_99b4_63546d7bffc8.slice/crio-901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90 WatchSource:0}: Error finding container 901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90: Status 404 returned error can't find the container with id 901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90 Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.374269 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 
21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.540430 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.733716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" event={"ID":"0a4bbdfa-4451-4626-994d-1334856bd30f","Type":"ContainerStarted","Data":"9833badc3c249eee2715f14690b315748a4674132fb9e1f02b964aa8681b6387"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.747310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerStarted","Data":"d5567924ca97846b5fd833f82b6000e5062e393c405a7ba0996f30a5b3b9c88c"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.765453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsrtw" event={"ID":"4f7c29a5-bb18-4493-99b4-63546d7bffc8","Type":"ContainerStarted","Data":"0d97562f245edff7e667772debe2af2b3722ed6710e10c772c0d145d308f9bf8"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.765494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsrtw" event={"ID":"4f7c29a5-bb18-4493-99b4-63546d7bffc8","Type":"ContainerStarted","Data":"901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.782462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vsw9" event={"ID":"64774f1f-f141-4fad-a226-1ac6b3a93782","Type":"ContainerStarted","Data":"106c365e149408f83cdf4810688160480cb6b3a7fdd7e5a03c0cc9ff6385e9ef"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.811986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerStarted","Data":"42cdef44c36b584888bbd452100382563cd8a27f6bd837a0e48026e3083b9d62"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.826891 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jsrtw" podStartSLOduration=19.826858935 podStartE2EDuration="19.826858935s" podCreationTimestamp="2026-02-16 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:31.805642651 +0000 UTC m=+1244.458921542" watchObservedRunningTime="2026-02-16 21:58:31.826858935 +0000 UTC m=+1244.480137826" Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.827004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4qx2s" event={"ID":"92b62519-345c-4ed1-b2cc-63186693467d","Type":"ContainerStarted","Data":"b4de07b889d2b2b23a99c349151052c0b355c503d3dc19562cd48a2b5c241d21"} Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.880485 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7vsw9" podStartSLOduration=5.336877344 podStartE2EDuration="28.880305791s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:58:05.390770194 +0000 UTC m=+1218.044049085" lastFinishedPulling="2026-02-16 21:58:28.934198641 +0000 UTC m=+1241.587477532" observedRunningTime="2026-02-16 21:58:31.822695812 +0000 UTC m=+1244.475974713" watchObservedRunningTime="2026-02-16 21:58:31.880305791 +0000 UTC m=+1244.533584692" Feb 16 21:58:31 crc 
kubenswrapper[4792]: I0216 21:58:31.909042 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4qx2s" podStartSLOduration=5.114962057 podStartE2EDuration="28.909012297s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:58:05.108794962 +0000 UTC m=+1217.762073853" lastFinishedPulling="2026-02-16 21:58:28.902845202 +0000 UTC m=+1241.556124093" observedRunningTime="2026-02-16 21:58:31.85810503 +0000 UTC m=+1244.511383921" watchObservedRunningTime="2026-02-16 21:58:31.909012297 +0000 UTC m=+1244.562291188" Feb 16 21:58:31 crc kubenswrapper[4792]: I0216 21:58:31.940367 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.049015 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" path="/var/lib/kubelet/pods/4d82eba4-4763-4dc0-a3f3-5236c0119764/volumes" Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.846380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerStarted","Data":"91d8b01a9668051c525bff8feae20fb98fabb81992b60d2574e1f6824a51249a"} Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.846891 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerStarted","Data":"068bdaaa57629a6b3b04d1c0cb57a975ab928bf2548f0fc88971cfff99784e8e"} Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.850800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerStarted","Data":"292a62c4357341975de7a60cb9ce980634c1fc9a1bba2ed88e7873810d1bcf82"} Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.869475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerStarted","Data":"80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8"} Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.873754 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerID="29cde44eba16c61f0b26b84931e1461db7bf00f1c6c1a6929cdd17fa46c13172" exitCode=0 Feb 16 21:58:32 crc kubenswrapper[4792]: I0216 21:58:32.873894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" event={"ID":"0a4bbdfa-4451-4626-994d-1334856bd30f","Type":"ContainerDied","Data":"29cde44eba16c61f0b26b84931e1461db7bf00f1c6c1a6929cdd17fa46c13172"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.042660 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.044827 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.047934 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.048240 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.060034 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.136850 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.136957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.137054 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.137129 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.137233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.137506 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7j58\" (UniqueName: \"kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.137873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245264 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245618 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245646 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245683 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245705 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.245791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7j58\" (UniqueName: \"kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.255946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.256363 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.256804 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: 
\"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.257126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.266794 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.269457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7j58\" (UniqueName: \"kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.275288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs\") pod \"neutron-7686fdb8c5-qzv2j\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.384005 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.889237 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerStarted","Data":"b502d9d3e57eb08d08035cf2fdac8cc8c7c7d30a9921b5fa533d216034a1a605"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.895114 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerStarted","Data":"2b3b19200f4b032f8178f3c40fbfe90c01154988f221c2db38d6aa55f60c917f"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.899239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" event={"ID":"0a4bbdfa-4451-4626-994d-1334856bd30f","Type":"ContainerStarted","Data":"42a2913e9ff4076b6bdc79ed2870d4c4983f7b4d79f23ec882385d293aae48f8"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.899378 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.901861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerStarted","Data":"dd2f06bb4ed8aad609227120c023177d23fdf403e5fec286afed063cd65345e4"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.902070 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.903734 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerStarted","Data":"f751dc4120e69b078dffc2224f8e0b13cefeeca2f0e9ad23bf9cd001474ebe18"} Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.923543 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.923493601 podStartE2EDuration="24.923493601s" podCreationTimestamp="2026-02-16 21:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:33.920181322 +0000 UTC m=+1246.573460233" watchObservedRunningTime="2026-02-16 21:58:33.923493601 +0000 UTC m=+1246.576772492" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.966848 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fc7bbfd9b-jkwk2" podStartSLOduration=3.966832234 podStartE2EDuration="3.966832234s" podCreationTimestamp="2026-02-16 21:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:33.956936006 +0000 UTC m=+1246.610214897" watchObservedRunningTime="2026-02-16 21:58:33.966832234 +0000 UTC m=+1246.620111125" Feb 16 21:58:33 crc kubenswrapper[4792]: I0216 21:58:33.978863 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" podStartSLOduration=3.978845029 podStartE2EDuration="3.978845029s" podCreationTimestamp="2026-02-16 21:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:33.973387371 +0000 UTC m=+1246.626666262" watchObservedRunningTime="2026-02-16 21:58:33.978845029 +0000 UTC m=+1246.632123920" Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.006040 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.006016604 podStartE2EDuration="6.006016604s" podCreationTimestamp="2026-02-16 21:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:33.997340719 +0000 UTC m=+1246.650619610" watchObservedRunningTime="2026-02-16 21:58:34.006016604 +0000 UTC m=+1246.659295485" Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.015533 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-bjcg8" podUID="4d82eba4-4763-4dc0-a3f3-5236c0119764" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.055856 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.914474 4792 generic.go:334] "Generic (PLEG): container finished" podID="64774f1f-f141-4fad-a226-1ac6b3a93782" containerID="106c365e149408f83cdf4810688160480cb6b3a7fdd7e5a03c0cc9ff6385e9ef" exitCode=0 Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.914536 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vsw9" event={"ID":"64774f1f-f141-4fad-a226-1ac6b3a93782","Type":"ContainerDied","Data":"106c365e149408f83cdf4810688160480cb6b3a7fdd7e5a03c0cc9ff6385e9ef"} Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.919144 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerStarted","Data":"f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b"} Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.919187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerStarted","Data":"7b5c510268f2f3057462dc91616df0420871dfb753c6a50bad8fb3ec29ce3bc2"} Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.919199 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerStarted","Data":"a44927403482cafc74f9be989c9f26c4c70610b518983290d3ed85e07dc7610e"} Feb 16 21:58:34 crc kubenswrapper[4792]: I0216 21:58:34.956643 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7686fdb8c5-qzv2j" podStartSLOduration=2.956622893 podStartE2EDuration="2.956622893s" podCreationTimestamp="2026-02-16 21:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:34.949910122 +0000 UTC m=+1247.603189133" watchObservedRunningTime="2026-02-16 21:58:34.956622893 +0000 UTC m=+1247.609901784" Feb 16 21:58:35 crc kubenswrapper[4792]: I0216 21:58:35.932264 4792 generic.go:334] "Generic (PLEG): container finished" podID="4f7c29a5-bb18-4493-99b4-63546d7bffc8" containerID="0d97562f245edff7e667772debe2af2b3722ed6710e10c772c0d145d308f9bf8" exitCode=0 Feb 16 21:58:35 crc kubenswrapper[4792]: I0216 21:58:35.932332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsrtw" event={"ID":"4f7c29a5-bb18-4493-99b4-63546d7bffc8","Type":"ContainerDied","Data":"0d97562f245edff7e667772debe2af2b3722ed6710e10c772c0d145d308f9bf8"} Feb 16 21:58:35 crc kubenswrapper[4792]: I0216 21:58:35.939537 4792 generic.go:334] "Generic (PLEG): container finished" podID="92b62519-345c-4ed1-b2cc-63186693467d" containerID="b4de07b889d2b2b23a99c349151052c0b355c503d3dc19562cd48a2b5c241d21" exitCode=0 Feb 16 21:58:35 crc kubenswrapper[4792]: I0216 21:58:35.940901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4qx2s" event={"ID":"92b62519-345c-4ed1-b2cc-63186693467d","Type":"ContainerDied","Data":"b4de07b889d2b2b23a99c349151052c0b355c503d3dc19562cd48a2b5c241d21"} Feb 16 21:58:35 crc kubenswrapper[4792]: I0216 21:58:35.940944 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:58:36 crc kubenswrapper[4792]: I0216 21:58:36.845512 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:36 crc kubenswrapper[4792]: I0216 21:58:36.955140 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7vsw9" event={"ID":"64774f1f-f141-4fad-a226-1ac6b3a93782","Type":"ContainerDied","Data":"7b88936e59270b0dc9b3519f077091c2d8226d978e8262c2f4b6bbd45fc8bda4"} Feb 16 21:58:36 crc kubenswrapper[4792]: I0216 21:58:36.955207 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b88936e59270b0dc9b3519f077091c2d8226d978e8262c2f4b6bbd45fc8bda4" Feb 16 21:58:36 crc kubenswrapper[4792]: I0216 21:58:36.955240 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7vsw9" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.041153 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts\") pod \"64774f1f-f141-4fad-a226-1ac6b3a93782\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.041250 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle\") pod \"64774f1f-f141-4fad-a226-1ac6b3a93782\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.041279 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5hp8\" (UniqueName: \"kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8\") pod \"64774f1f-f141-4fad-a226-1ac6b3a93782\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.041328 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data\") pod \"64774f1f-f141-4fad-a226-1ac6b3a93782\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.041424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs\") pod \"64774f1f-f141-4fad-a226-1ac6b3a93782\" (UID: \"64774f1f-f141-4fad-a226-1ac6b3a93782\") " Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.042126 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs" (OuterVolumeSpecName: "logs") pod "64774f1f-f141-4fad-a226-1ac6b3a93782" (UID: "64774f1f-f141-4fad-a226-1ac6b3a93782"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.087645 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8" (OuterVolumeSpecName: "kube-api-access-c5hp8") pod "64774f1f-f141-4fad-a226-1ac6b3a93782" (UID: "64774f1f-f141-4fad-a226-1ac6b3a93782"). InnerVolumeSpecName "kube-api-access-c5hp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.094795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts" (OuterVolumeSpecName: "scripts") pod "64774f1f-f141-4fad-a226-1ac6b3a93782" (UID: "64774f1f-f141-4fad-a226-1ac6b3a93782"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.100093 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.100383 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data" (OuterVolumeSpecName: "config-data") pod "64774f1f-f141-4fad-a226-1ac6b3a93782" (UID: "64774f1f-f141-4fad-a226-1ac6b3a93782"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:37 crc kubenswrapper[4792]: E0216 21:58:37.100545 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64774f1f-f141-4fad-a226-1ac6b3a93782" containerName="placement-db-sync" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.100562 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="64774f1f-f141-4fad-a226-1ac6b3a93782" containerName="placement-db-sync" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.100767 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="64774f1f-f141-4fad-a226-1ac6b3a93782" containerName="placement-db-sync" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.101808 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.105607 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.105815 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.139785 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.144030 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.144057 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5hp8\" (UniqueName: \"kubernetes.io/projected/64774f1f-f141-4fad-a226-1ac6b3a93782-kube-api-access-c5hp8\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.144068 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.144076 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64774f1f-f141-4fad-a226-1ac6b3a93782-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.180751 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64774f1f-f141-4fad-a226-1ac6b3a93782" (UID: "64774f1f-f141-4fad-a226-1ac6b3a93782"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.247019 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkjj\" (UniqueName: \"kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.247407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.247481 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.247500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.248337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.248491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.248553 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.249697 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64774f1f-f141-4fad-a226-1ac6b3a93782-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352486 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: 
I0216 21:58:37.352535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352632 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352729 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkjj\" (UniqueName: \"kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.352840 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.354096 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.359268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.362246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.362885 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.363270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.363447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.382100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkjj\" (UniqueName: \"kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj\") pod \"placement-56979bc86d-lb4lw\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:37 crc kubenswrapper[4792]: I0216 21:58:37.546905 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.043406 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.044747 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.079860 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.113096 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.941433 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.941486 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.941501 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.941513 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.972170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.982586 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc kubenswrapper[4792]: I0216 21:58:39.982720 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:39 crc 
kubenswrapper[4792]: I0216 21:58:39.993906 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.727130 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.742416 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.784558 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848564 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k64f\" (UniqueName: \"kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848743 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97lwb\" (UniqueName: \"kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb\") pod \"92b62519-345c-4ed1-b2cc-63186693467d\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848781 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848826 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle\") pod \"92b62519-345c-4ed1-b2cc-63186693467d\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848928 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.848990 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys\") pod \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\" (UID: \"4f7c29a5-bb18-4493-99b4-63546d7bffc8\") " Feb 
16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.849034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data\") pod \"92b62519-345c-4ed1-b2cc-63186693467d\" (UID: \"92b62519-345c-4ed1-b2cc-63186693467d\") " Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.856032 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "92b62519-345c-4ed1-b2cc-63186693467d" (UID: "92b62519-345c-4ed1-b2cc-63186693467d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.857321 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb" (OuterVolumeSpecName: "kube-api-access-97lwb") pod "92b62519-345c-4ed1-b2cc-63186693467d" (UID: "92b62519-345c-4ed1-b2cc-63186693467d"). InnerVolumeSpecName "kube-api-access-97lwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.862281 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.862509 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="dnsmasq-dns" containerID="cri-o://4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea" gracePeriod=10 Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.868007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.869164 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts" (OuterVolumeSpecName: "scripts") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.876141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f" (OuterVolumeSpecName: "kube-api-access-7k64f") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "kube-api-access-7k64f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.885747 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.947944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951114 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951210 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951222 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k64f\" (UniqueName: \"kubernetes.io/projected/4f7c29a5-bb18-4493-99b4-63546d7bffc8-kube-api-access-7k64f\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951232 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951241 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951249 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97lwb\" (UniqueName: \"kubernetes.io/projected/92b62519-345c-4ed1-b2cc-63186693467d-kube-api-access-97lwb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.951279 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:40 crc kubenswrapper[4792]: I0216 21:58:40.972829 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data" (OuterVolumeSpecName: "config-data") pod "4f7c29a5-bb18-4493-99b4-63546d7bffc8" (UID: "4f7c29a5-bb18-4493-99b4-63546d7bffc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.001058 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.018072 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4qx2s" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.018080 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4qx2s" event={"ID":"92b62519-345c-4ed1-b2cc-63186693467d","Type":"ContainerDied","Data":"ecc063d7c543f15ead781d36eefca69d72c6edccca0b15fb054a3e3adc40981c"} Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.019071 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc063d7c543f15ead781d36eefca69d72c6edccca0b15fb054a3e3adc40981c" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.025804 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92b62519-345c-4ed1-b2cc-63186693467d" (UID: "92b62519-345c-4ed1-b2cc-63186693467d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.050485 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerStarted","Data":"11cdc2bac82de5e912425bbdf0e165de3601044d447be4c97d6aef3d7abd1a74"} Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.052752 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7c29a5-bb18-4493-99b4-63546d7bffc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.052862 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b62519-345c-4ed1-b2cc-63186693467d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.065076 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsrtw" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.065834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsrtw" event={"ID":"4f7c29a5-bb18-4493-99b4-63546d7bffc8","Type":"ContainerDied","Data":"901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90"} Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.065936 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901d29422cbeca47d245155eb31af1d5bbda859e427a92a6f5903da278fded90" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.560852 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667228 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667284 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24fvb\" (UniqueName: \"kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667320 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667552 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.667743 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config\") pod \"196b6e8b-8689-469d-a348-455b4b9b701a\" (UID: \"196b6e8b-8689-469d-a348-455b4b9b701a\") " Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.693021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb" (OuterVolumeSpecName: "kube-api-access-24fvb") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "kube-api-access-24fvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.738218 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.741522 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.748094 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.760512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config" (OuterVolumeSpecName: "config") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.773028 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.773065 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.773077 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24fvb\" (UniqueName: \"kubernetes.io/projected/196b6e8b-8689-469d-a348-455b4b9b701a-kube-api-access-24fvb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.773088 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.773097 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.787104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "196b6e8b-8689-469d-a348-455b4b9b701a" (UID: "196b6e8b-8689-469d-a348-455b4b9b701a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.875436 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/196b6e8b-8689-469d-a348-455b4b9b701a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914087 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5978f67fb4-lxqn8"] Feb 16 21:58:41 crc kubenswrapper[4792]: E0216 21:58:41.914564 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b62519-345c-4ed1-b2cc-63186693467d" containerName="barbican-db-sync" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914581 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b62519-345c-4ed1-b2cc-63186693467d" containerName="barbican-db-sync" Feb 16 21:58:41 crc kubenswrapper[4792]: E0216 21:58:41.914619 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="init" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914627 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="init" Feb 16 21:58:41 crc kubenswrapper[4792]: E0216 21:58:41.914653 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7c29a5-bb18-4493-99b4-63546d7bffc8" containerName="keystone-bootstrap" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914659 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7c29a5-bb18-4493-99b4-63546d7bffc8" containerName="keystone-bootstrap" Feb 16 21:58:41 crc kubenswrapper[4792]: E0216 21:58:41.914675 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="dnsmasq-dns" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914681 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="dnsmasq-dns" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914878 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" containerName="dnsmasq-dns" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914897 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b62519-345c-4ed1-b2cc-63186693467d" containerName="barbican-db-sync" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.914906 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7c29a5-bb18-4493-99b4-63546d7bffc8" containerName="keystone-bootstrap" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.915622 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.919903 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.920075 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.920297 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.920411 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gjvkz" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.920566 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.923417 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.938051 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5978f67fb4-lxqn8"] Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977356 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-internal-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977432 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-scripts\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-config-data\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-credential-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977581 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-fernet-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9t9\" (UniqueName: \"kubernetes.io/projected/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-kube-api-access-dg9t9\") pod \"keystone-5978f67fb4-lxqn8\" (UID: 
\"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977820 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-combined-ca-bundle\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:41 crc kubenswrapper[4792]: I0216 21:58:41.977922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-public-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.013038 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.034840 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.069107 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrtnx" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.069328 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.069582 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093444 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-public-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093629 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-internal-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-scripts\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-config-data\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093841 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-credential-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.093911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-fernet-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.094071 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg9t9\" (UniqueName: \"kubernetes.io/projected/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-kube-api-access-dg9t9\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.094096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-combined-ca-bundle\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.106213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-config-data\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.109930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-credential-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.111176 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-scripts\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.111193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-internal-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.113161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-fernet-keys\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.121225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-combined-ca-bundle\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " 
pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.131410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-public-tls-certs\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.144459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg9t9\" (UniqueName: \"kubernetes.io/projected/66dc0f43-b1f3-4acc-a189-5d4df2f08aeb-kube-api-access-dg9t9\") pod \"keystone-5978f67fb4-lxqn8\" (UID: \"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb\") " pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.146708 4792 generic.go:334] "Generic (PLEG): container finished" podID="196b6e8b-8689-469d-a348-455b4b9b701a" containerID="4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea" exitCode=0 Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.146895 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.154298 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166249 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166290 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" event={"ID":"196b6e8b-8689-469d-a348-455b4b9b701a","Type":"ContainerDied","Data":"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea"} Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-b84fx" event={"ID":"196b6e8b-8689-469d-a348-455b4b9b701a","Type":"ContainerDied","Data":"4562af2acc0b9db4f47f913ff3b9f67c338c16dea1148b62851469108b7f9b7b"} Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166345 4792 scope.go:117] "RemoveContainer" containerID="4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166546 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerStarted","Data":"57a1ba172d41bee6ec9de4e9541ccf03b6291834f9f3bdf34c4527795c990110"} Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerStarted","Data":"e3bba7580c0ce6a6ee2d1cbe9fafd053fd677cb20d4b6ec57f54c0fb6c0f43d8"} Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.166645 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.181508 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.191113 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.195962 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtt2r\" (UniqueName: \"kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.196047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.196143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.196166 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.196189 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.196194 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.210762 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.242721 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.292737 4792 scope.go:117] "RemoveContainer" containerID="4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.292869 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298381 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298447 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffmn\" (UniqueName: \"kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298485 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298543 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298791 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtt2r\" (UniqueName: \"kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298937 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66k9s\" (UniqueName: \"kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.298987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.299071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.299132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.299187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.299213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.299240 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: 
I0216 21:58:42.308112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.311970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.313977 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.319485 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-b84fx"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.348997 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.350830 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.362493 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.373752 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.377706 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.391417 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-676b487647-vn2d7"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.392798 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtt2r\" (UniqueName: \"kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r\") pod \"barbican-keystone-listener-754cc64db8-4chxc\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.393911 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66k9s\" (UniqueName: \"kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402851 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffmn\" (UniqueName: \"kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402870 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " 
pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402946 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.402974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.413440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.414066 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.414585 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.415317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.416133 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.420130 4792 scope.go:117] "RemoveContainer" containerID="4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.429652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.420648 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:42 crc kubenswrapper[4792]: E0216 21:58:42.443776 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea\": container with ID starting with 4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea not found: ID does not exist" containerID="4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.443821 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea"} err="failed to get container status \"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea\": rpc error: code = NotFound desc = could not find container \"4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea\": container with ID starting with 4b0ea5448f1efba56b5d08fd5665d5cea816298a686031c57801ee0b7c256eea not found: ID does not exist" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.443847 4792 scope.go:117] "RemoveContainer" containerID="4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa" Feb 16 21:58:42 crc kubenswrapper[4792]: E0216 21:58:42.454556 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa\": container with ID starting with 4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa not found: ID does not exist" containerID="4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.454623 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa"} err="failed to get container status \"4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa\": rpc error: code = NotFound desc = could not find container \"4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa\": container with ID starting with 4c13460f6536f30808aafc414968dabd1b9b731dbfd4cab81373002b5c8614fa not found: ID does not exist" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.455659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.456296 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.460458 
4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676b487647-vn2d7"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.465168 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffmn\" (UniqueName: \"kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn\") pod \"dnsmasq-dns-688c87cc99-5f244\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.480099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66k9s\" (UniqueName: \"kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s\") pod \"barbican-worker-7855c46fdc-mcbx4\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.480631 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d878f6fc4-w97vq"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.482454 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509550 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a098cc94-e931-444d-a61b-6d2c8e32f435-logs\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509632 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data-custom\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509842 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509900 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqf28\" (UniqueName: 
\"kubernetes.io/projected/a098cc94-e931-444d-a61b-6d2c8e32f435-kube-api-access-vqf28\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.509927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvrz\" (UniqueName: \"kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.510001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.510028 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-combined-ca-bundle\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.510052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.537133 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d878f6fc4-w97vq"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.543771 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.569100 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612699 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-combined-ca-bundle\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ff184ef-0e19-471a-b3b1-38e321e576cd-logs\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a098cc94-e931-444d-a61b-6d2c8e32f435-logs\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc8b2\" (UniqueName: \"kubernetes.io/projected/0ff184ef-0e19-471a-b3b1-38e321e576cd-kube-api-access-bc8b2\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data-custom\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.612991 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-combined-ca-bundle\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.613080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.613102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.613137 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqf28\" (UniqueName: \"kubernetes.io/projected/a098cc94-e931-444d-a61b-6d2c8e32f435-kube-api-access-vqf28\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.613157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvrz\" (UniqueName: \"kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.613187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data-custom\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.618086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.624051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-combined-ca-bundle\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.634659 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.635970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.636255 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.636407 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.637849 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a098cc94-e931-444d-a61b-6d2c8e32f435-logs\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.638108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data-custom\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.642143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a098cc94-e931-444d-a61b-6d2c8e32f435-config-data\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.647081 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.653174 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqf28\" (UniqueName: \"kubernetes.io/projected/a098cc94-e931-444d-a61b-6d2c8e32f435-kube-api-access-vqf28\") pod \"barbican-worker-676b487647-vn2d7\" (UID: \"a098cc94-e931-444d-a61b-6d2c8e32f435\") " pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.660223 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.665187 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvrz\" (UniqueName: \"kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz\") pod \"barbican-api-55b846578-qkqk8\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.714884 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc8b2\" (UniqueName: \"kubernetes.io/projected/0ff184ef-0e19-471a-b3b1-38e321e576cd-kube-api-access-bc8b2\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.714960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-combined-ca-bundle\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.715167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data-custom\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.715236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.715330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ff184ef-0e19-471a-b3b1-38e321e576cd-logs\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.715964 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ff184ef-0e19-471a-b3b1-38e321e576cd-logs\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.726091 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-combined-ca-bundle\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.726531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data-custom\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.734447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff184ef-0e19-471a-b3b1-38e321e576cd-config-data\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: 
\"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.744179 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc8b2\" (UniqueName: \"kubernetes.io/projected/0ff184ef-0e19-471a-b3b1-38e321e576cd-kube-api-access-bc8b2\") pod \"barbican-keystone-listener-6d878f6fc4-w97vq\" (UID: \"0ff184ef-0e19-471a-b3b1-38e321e576cd\") " pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.846334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.846804 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.846862 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.847037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckgm\" (UniqueName: \"kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.847150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.848260 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-676b487647-vn2d7" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.848584 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.882675 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.956469 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ckgm\" (UniqueName: \"kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.956553 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.956614 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.956750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.956781 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.958966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.966953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.971543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.977992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 
21:58:42 crc kubenswrapper[4792]: I0216 21:58:42.990798 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ckgm\" (UniqueName: \"kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm\") pod \"barbican-api-84b44888c4-9ndb2\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.203999 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5978f67fb4-lxqn8"] Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.207146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerStarted","Data":"c408ae8f631e5d80a32f245a88269c418e88f194d7645790af7a8a0d7e072ca9"} Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.207945 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.227868 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.261128 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56979bc86d-lb4lw" podStartSLOduration=6.261106439 podStartE2EDuration="6.261106439s" podCreationTimestamp="2026-02-16 21:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:43.252463706 +0000 UTC m=+1255.905742597" watchObservedRunningTime="2026-02-16 21:58:43.261106439 +0000 UTC m=+1255.914385330" Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.442400 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.939805 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:43 crc kubenswrapper[4792]: I0216 21:58:43.960658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.099061 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196b6e8b-8689-469d-a348-455b4b9b701a" path="/var/lib/kubelet/pods/196b6e8b-8689-469d-a348-455b4b9b701a/volumes" Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.320559 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5978f67fb4-lxqn8" event={"ID":"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb","Type":"ContainerStarted","Data":"69ee39cd45499e09242b8cde8820cdf2d038e32c504b3d42919e149863087837"} Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.320902 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5978f67fb4-lxqn8" event={"ID":"66dc0f43-b1f3-4acc-a189-5d4df2f08aeb","Type":"ContainerStarted","Data":"d4267dad8110632ead2e83591cdb3e96d10e500f6dfe45ff41598ebcce08582c"} Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.321092 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.324636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" 
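The mount entries above follow a fixed per-volume sequence: reconciler_common.go:245 logs "operationExecutor.VerifyControllerAttachedVolume started" once the volume is known to the pod, reconciler_common.go:218 logs "operationExecutor.MountVolume started", and operation_generator.go:637 logs "MountVolume.SetUp succeeded" when the setup completes. A minimal, illustrative Go sketch of that desired-state vs. actual-state reconcile pattern follows; it is a toy model with invented names, not the kubelet's real volumemanager code:

```go
// Toy model of the volume reconciler pattern visible in these entries:
// compare desired state (volumes the pod spec wants) against actual state
// (volumes already set up) and run SetUp only for the difference.
package main

import "fmt"

type volume struct{ name, plugin string }

func reconcile(desired []volume, actual map[string]bool) {
	for _, v := range desired {
		if actual[v.name] {
			continue // already mounted; nothing to do
		}
		// corresponds to "operationExecutor.MountVolume started for volume ..."
		fmt.Printf("MountVolume started for volume %q (%s)\n", v.name, v.plugin)
		// corresponds to "MountVolume.SetUp succeeded for volume ..."
		actual[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"config-data", "kubernetes.io/secret"},
		{"logs", "kubernetes.io/empty-dir"},
		{"kube-api-access-bc8b2", "kubernetes.io/projected"},
	}
	// "logs" is already set up, so only the other two are mounted.
	reconcile(desired, map[string]bool{"logs": true})
}
```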
event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerStarted","Data":"c12ba5c4a3aa1a4e06c62b07cceea46dcb7363955b8a03a8ed90d1131da321c6"} Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.333225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-5f244" event={"ID":"6d33c31b-a60a-4f1e-bdf0-108837e3449c","Type":"ContainerStarted","Data":"dfb552a43c0ba77763d88e7d4da07374a18d507cda1dc261294e6e33809e49a0"} Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.338465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerStarted","Data":"c4d9d284c7b44e6377fd3b6d887d3eae7bb39f95b102b2c3138d16e490e1c4f8"} Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.338529 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.370730 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5978f67fb4-lxqn8" podStartSLOduration=3.370711421 podStartE2EDuration="3.370711421s" podCreationTimestamp="2026-02-16 21:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:44.367070412 +0000 UTC m=+1257.020349303" watchObservedRunningTime="2026-02-16 21:58:44.370711421 +0000 UTC m=+1257.023990312" Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.695814 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.733334 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676b487647-vn2d7"] Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.754701 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d878f6fc4-w97vq"] Feb 16 21:58:44 crc kubenswrapper[4792]: I0216 21:58:44.772861 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.219252 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.349991 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.373668 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.373811 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.437238 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.471644 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676b487647-vn2d7" event={"ID":"a098cc94-e931-444d-a61b-6d2c8e32f435","Type":"ContainerStarted","Data":"7e2c313ee5b2d546ae4a760de108e385b5220247fa2b0634350a18fa7531ae04"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.509724 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" event={"ID":"0ff184ef-0e19-471a-b3b1-38e321e576cd","Type":"ContainerStarted","Data":"3d031f658d9495f253c5f2a53ce9ee5576e64c90e30511a7692302e48d3017e1"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.521620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerStarted","Data":"1d37190d4620a0b8b15b183ad8e4218ea4e801273617467ea95e2e6f064ad3ae"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.542209 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jvjtg" event={"ID":"6432216a-a549-4060-8369-b6a0d86f1ba2","Type":"ContainerStarted","Data":"14e179d1594a1dad5a8b6bbc516a3156a3b7dfc968b1d4d68dc001b7f4b9502b"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.550280 4792 generic.go:334] "Generic (PLEG): container finished" podID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerID="39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb" exitCode=0 Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.550377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-5f244" event={"ID":"6d33c31b-a60a-4f1e-bdf0-108837e3449c","Type":"ContainerDied","Data":"39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.566366 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerStarted","Data":"dc1cb62bce339a8f712ecc149a1a3812e684e08f1b76e0d9a428f0d662f6f812"} Feb 16 21:58:45 crc kubenswrapper[4792]: I0216 21:58:45.598540 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jvjtg" podStartSLOduration=4.373412935 podStartE2EDuration="42.598522651s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:58:05.396469737 +0000 UTC m=+1218.049748628" lastFinishedPulling="2026-02-16 21:58:43.621579453 +0000 UTC m=+1256.274858344" observedRunningTime="2026-02-16 21:58:45.567320046 +0000 UTC m=+1258.220598947" watchObservedRunningTime="2026-02-16 21:58:45.598522651 +0000 UTC m=+1258.251801542" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.130122 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.175890 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-698d56d666-pskd9"] Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.180728 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.186963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.191032 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.221944 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-698d56d666-pskd9"] Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.246346 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data-custom\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.246935 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmbb\" (UniqueName: \"kubernetes.io/projected/aba20562-d0b4-4de1-acaa-d0968fddb399-kube-api-access-hsmbb\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.247044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-combined-ca-bundle\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.247171 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-public-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.247221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.247248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba20562-d0b4-4de1-acaa-d0968fddb399-logs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.247274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-internal-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.349694 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data-custom\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.349792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmbb\" (UniqueName: \"kubernetes.io/projected/aba20562-d0b4-4de1-acaa-d0968fddb399-kube-api-access-hsmbb\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.349858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-combined-ca-bundle\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.349944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-public-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.351502 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.351555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba20562-d0b4-4de1-acaa-d0968fddb399-logs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.351581 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-internal-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.352193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba20562-d0b4-4de1-acaa-d0968fddb399-logs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.365252 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.365752 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-combined-ca-bundle\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.365771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-public-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.366208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-internal-tls-certs\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.371101 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aba20562-d0b4-4de1-acaa-d0968fddb399-config-data-custom\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.378103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmbb\" (UniqueName: \"kubernetes.io/projected/aba20562-d0b4-4de1-acaa-d0968fddb399-kube-api-access-hsmbb\") pod \"barbican-api-698d56d666-pskd9\" (UID: \"aba20562-d0b4-4de1-acaa-d0968fddb399\") " pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.515672 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.604694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-5f244" event={"ID":"6d33c31b-a60a-4f1e-bdf0-108837e3449c","Type":"ContainerStarted","Data":"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.604951 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.614130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerStarted","Data":"30d6ce3d7fa4be36ddc6cab1786a9e98437360df403a74ea7c307dfdbb3c02c6"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.614207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerStarted","Data":"7dc14f19510a407f74bbcf930bdc45733ef59da96982c01d1a1a7222496e436f"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.615940 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.615992 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.620221 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerStarted","Data":"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.620260 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerStarted","Data":"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.620819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.620882 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.643302 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-5f244" podStartSLOduration=4.643283198 podStartE2EDuration="4.643283198s" podCreationTimestamp="2026-02-16 21:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:46.634542972 +0000 UTC m=+1259.287821873" watchObservedRunningTime="2026-02-16 21:58:46.643283198 +0000 UTC m=+1259.296562089" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.646688 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-njp9q" event={"ID":"72d59609-2910-4114-98d4-0f5154b95b1b","Type":"ContainerStarted","Data":"6492ad36f33c8e7001262910a59cafca97908ab648f406003297b7c2fc2e33e0"} Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.686939 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-55b846578-qkqk8" podStartSLOduration=4.686917558 podStartE2EDuration="4.686917558s" podCreationTimestamp="2026-02-16 21:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:46.671891321 +0000 UTC m=+1259.325170212" watchObservedRunningTime="2026-02-16 21:58:46.686917558 +0000 UTC m=+1259.340196449" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.723017 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84b44888c4-9ndb2" podStartSLOduration=4.723000125 podStartE2EDuration="4.723000125s" podCreationTimestamp="2026-02-16 21:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:46.706912919 +0000 UTC m=+1259.360191810" watchObservedRunningTime="2026-02-16 21:58:46.723000125 +0000 UTC m=+1259.376279016" Feb 16 21:58:46 crc kubenswrapper[4792]: I0216 21:58:46.744666 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-njp9q" podStartSLOduration=4.10762451 podStartE2EDuration="43.74464609s" podCreationTimestamp="2026-02-16 21:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:58:05.054903962 +0000 UTC m=+1217.708182853" lastFinishedPulling="2026-02-16 21:58:44.691925542 +0000 UTC m=+1257.345204433" observedRunningTime="2026-02-16 21:58:46.732218214 +0000 UTC m=+1259.385497115" watchObservedRunningTime="2026-02-16 21:58:46.74464609 +0000 UTC m=+1259.397924981" Feb 16 21:58:47 crc kubenswrapper[4792]: I0216 21:58:47.110073 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-698d56d666-pskd9"] Feb 16 21:58:47 crc kubenswrapper[4792]: W0216 21:58:47.329914 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba20562_d0b4_4de1_acaa_d0968fddb399.slice/crio-d790ec64110d0e02d231f08cfb7b8180daa3ce99ba02fca27e821530c1eb2381 WatchSource:0}: Error finding container d790ec64110d0e02d231f08cfb7b8180daa3ce99ba02fca27e821530c1eb2381: Status 404 returned error can't find the container with id d790ec64110d0e02d231f08cfb7b8180daa3ce99ba02fca27e821530c1eb2381 Feb 16 21:58:47 crc kubenswrapper[4792]: I0216 21:58:47.663011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-698d56d666-pskd9" event={"ID":"aba20562-d0b4-4de1-acaa-d0968fddb399","Type":"ContainerStarted","Data":"d790ec64110d0e02d231f08cfb7b8180daa3ce99ba02fca27e821530c1eb2381"} Feb 16 21:58:47 crc kubenswrapper[4792]: I0216 21:58:47.663528 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b846578-qkqk8" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api-log" containerID="cri-o://b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" gracePeriod=30 Feb 16 21:58:47 crc kubenswrapper[4792]: I0216 21:58:47.663560 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b846578-qkqk8" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api" containerID="cri-o://7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" gracePeriod=30 Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.431965 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.509561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom\") pod \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.509971 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs\") pod \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.510090 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data\") pod \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.510186 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvrz\" (UniqueName: \"kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz\") pod \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.510275 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle\") pod \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\" (UID: \"fbdffea5-e44f-429e-b62a-5e6bcf9f3131\") " Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.511054 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs" (OuterVolumeSpecName: "logs") pod "fbdffea5-e44f-429e-b62a-5e6bcf9f3131" (UID: "fbdffea5-e44f-429e-b62a-5e6bcf9f3131"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.517436 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz" (OuterVolumeSpecName: "kube-api-access-5fvrz") pod "fbdffea5-e44f-429e-b62a-5e6bcf9f3131" (UID: "fbdffea5-e44f-429e-b62a-5e6bcf9f3131"). InnerVolumeSpecName "kube-api-access-5fvrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.521103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbdffea5-e44f-429e-b62a-5e6bcf9f3131" (UID: "fbdffea5-e44f-429e-b62a-5e6bcf9f3131"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.613108 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvrz\" (UniqueName: \"kubernetes.io/projected/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-kube-api-access-5fvrz\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.613150 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.613161 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.616311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbdffea5-e44f-429e-b62a-5e6bcf9f3131" (UID: "fbdffea5-e44f-429e-b62a-5e6bcf9f3131"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.682734 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerStarted","Data":"a6d5b77277d64342cfdfbb73b1ed465359d13f068d660dd07711d4cf961f2866"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.684953 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-698d56d666-pskd9" event={"ID":"aba20562-d0b4-4de1-acaa-d0968fddb399","Type":"ContainerStarted","Data":"b23af12c47ad732a1c80fa0c3b2fc9b1bbf684e3cab107237171d4c7d7d491ae"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.685787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data" (OuterVolumeSpecName: "config-data") pod "fbdffea5-e44f-429e-b62a-5e6bcf9f3131" (UID: "fbdffea5-e44f-429e-b62a-5e6bcf9f3131"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.687881 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" event={"ID":"0ff184ef-0e19-471a-b3b1-38e321e576cd","Type":"ContainerStarted","Data":"cf8f1797d147ee5d0e6b2c2e3f14060b19da1d2e0ae9d3efcfe9241b7a360480"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690345 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerID="7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" exitCode=0 Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690372 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerID="b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" exitCode=143 Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerDied","Data":"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerDied","Data":"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b846578-qkqk8" event={"ID":"fbdffea5-e44f-429e-b62a-5e6bcf9f3131","Type":"ContainerDied","Data":"1d37190d4620a0b8b15b183ad8e4218ea4e801273617467ea95e2e6f064ad3ae"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690459 4792 scope.go:117] "RemoveContainer" containerID="7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.690615 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55b846578-qkqk8" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.698210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerStarted","Data":"f05449ebc304e7a6125ca00b8374fada11f603f0318a4f7546ea0cfa9094ca70"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.700279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676b487647-vn2d7" event={"ID":"a098cc94-e931-444d-a61b-6d2c8e32f435","Type":"ContainerStarted","Data":"9043abaeb73d73fd22c5a0a7e94b32e172be715699bae8f1eb73133fc18716fd"} Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.715016 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.715050 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdffea5-e44f-429e-b62a-5e6bcf9f3131-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.731883 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7855c46fdc-mcbx4" podStartSLOduration=3.869402484 podStartE2EDuration="7.731860057s" podCreationTimestamp="2026-02-16 21:58:41 +0000 UTC" firstStartedPulling="2026-02-16 21:58:43.984773749 +0000 UTC m=+1256.638052640" lastFinishedPulling="2026-02-16 21:58:47.847231322 +0000 UTC m=+1260.500510213" observedRunningTime="2026-02-16 21:58:48.718886525 +0000 UTC m=+1261.372165416" watchObservedRunningTime="2026-02-16 21:58:48.731860057 +0000 UTC m=+1261.385138938" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.757178 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.759733 4792 scope.go:117] "RemoveContainer" containerID="b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.769242 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55b846578-qkqk8"] Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.787656 4792 scope.go:117] "RemoveContainer" containerID="7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" Feb 16 21:58:48 crc kubenswrapper[4792]: E0216 21:58:48.788143 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840\": container with ID starting with 7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840 not found: ID does not exist" containerID="7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.788169 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840"} err="failed to get container status \"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840\": rpc error: code = NotFound desc = could not find container \"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840\": container with ID starting with 
7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840 not found: ID does not exist" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.788191 4792 scope.go:117] "RemoveContainer" containerID="b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" Feb 16 21:58:48 crc kubenswrapper[4792]: E0216 21:58:48.789074 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd\": container with ID starting with b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd not found: ID does not exist" containerID="b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.789099 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd"} err="failed to get container status \"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd\": rpc error: code = NotFound desc = could not find container \"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd\": container with ID starting with b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd not found: ID does not exist" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.789113 4792 scope.go:117] "RemoveContainer" containerID="7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.789802 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840"} err="failed to get container status \"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840\": rpc error: code = NotFound desc = could not find container \"7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840\": container with ID starting with 7176ef89d1d019da53571fe18313afffcefce6583bed399e72bfb67840ec0840 not found: ID does not exist" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.789843 4792 scope.go:117] "RemoveContainer" containerID="b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd" Feb 16 21:58:48 crc kubenswrapper[4792]: I0216 21:58:48.790121 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd"} err="failed to get container status \"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd\": rpc error: code = NotFound desc = could not find container \"b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd\": container with ID starting with b95804e1d54bbca444cd614fdfdab02280fb6918ea5889c59a8064341a9546dd not found: ID does not exist" Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.715757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-698d56d666-pskd9" event={"ID":"aba20562-d0b4-4de1-acaa-d0968fddb399","Type":"ContainerStarted","Data":"93e35e8c8539c039452a3d3740cbe2bc735c1eae80d4a5f2612c6ab6c7783e81"} Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.716332 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.720255 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" 
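The RemoveContainer/NotFound churn around barbican-api-55b846578-qkqk8 here is benign: once the first delete succeeds, the retried "ContainerStatus"/"DeleteContainer" calls get "container ... not found" back from CRI-O, and a missing container is exactly the end state deletion wants, so the errors are logged and dropped. A hypothetical sketch of that idempotent-delete pattern (invented helper, not the kubelet's actual code):

```go
// Why the NotFound errors are harmless: deletion is idempotent, so a
// NotFound from the runtime means the container is already gone.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("rpc error: code = NotFound")

// removeContainer stands in for the CRI RemoveContainer RPC.
func removeContainer(id string, runtime map[string]bool) error {
	if !runtime[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(runtime, id)
	return nil
}

func main() {
	runtime := map[string]bool{} // container was already removed earlier
	err := removeContainer("7176ef89d1d0", runtime)
	if errors.Is(err, errNotFound) {
		fmt.Println("already gone; treating delete as successful")
	}
}
```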
event={"ID":"0ff184ef-0e19-471a-b3b1-38e321e576cd","Type":"ContainerStarted","Data":"3596834338d1a2cf85c7ee703d36f7f4c9823605661b5ec52a5702c7d2dd0cd6"} Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.724853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerStarted","Data":"96f3c67fef5fa3064328203bcaa69ecbd74d3ab11c1d0ca0b014261a3b51bd3e"} Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.727994 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676b487647-vn2d7" event={"ID":"a098cc94-e931-444d-a61b-6d2c8e32f435","Type":"ContainerStarted","Data":"03ef9a776116c44938ea70d27a0cc524285cf0c7746c40d99ecf5daebb56e470"} Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.745578 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-698d56d666-pskd9" podStartSLOduration=3.745542813 podStartE2EDuration="3.745542813s" podCreationTimestamp="2026-02-16 21:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:49.743497417 +0000 UTC m=+1262.396776298" watchObservedRunningTime="2026-02-16 21:58:49.745542813 +0000 UTC m=+1262.398821704" Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.745635 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerStarted","Data":"698fd71e6686508a6c16e0dacb8c6f3f8e6c766a747e7e79664955ddc7b1d262"} Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.772462 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d878f6fc4-w97vq" podStartSLOduration=4.739892212 podStartE2EDuration="7.77241489s" podCreationTimestamp="2026-02-16 21:58:42 +0000 UTC" firstStartedPulling="2026-02-16 21:58:44.809898103 +0000 UTC m=+1257.463176994" lastFinishedPulling="2026-02-16 21:58:47.842420781 +0000 UTC m=+1260.495699672" observedRunningTime="2026-02-16 21:58:49.762925393 +0000 UTC m=+1262.416204294" watchObservedRunningTime="2026-02-16 21:58:49.77241489 +0000 UTC m=+1262.425693781" Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.789875 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-676b487647-vn2d7" podStartSLOduration=4.770757506 podStartE2EDuration="7.789852081s" podCreationTimestamp="2026-02-16 21:58:42 +0000 UTC" firstStartedPulling="2026-02-16 21:58:44.827215412 +0000 UTC m=+1257.480494303" lastFinishedPulling="2026-02-16 21:58:47.846309987 +0000 UTC m=+1260.499588878" observedRunningTime="2026-02-16 21:58:49.781359691 +0000 UTC m=+1262.434638602" watchObservedRunningTime="2026-02-16 21:58:49.789852081 +0000 UTC m=+1262.443130972" Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.819935 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.834906 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:49 crc kubenswrapper[4792]: I0216 21:58:49.837036 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" podStartSLOduration=4.449765317 
podStartE2EDuration="8.837017348s" podCreationTimestamp="2026-02-16 21:58:41 +0000 UTC" firstStartedPulling="2026-02-16 21:58:43.45514834 +0000 UTC m=+1256.108427231" lastFinishedPulling="2026-02-16 21:58:47.842400371 +0000 UTC m=+1260.495679262" observedRunningTime="2026-02-16 21:58:49.82493345 +0000 UTC m=+1262.478212341" watchObservedRunningTime="2026-02-16 21:58:49.837017348 +0000 UTC m=+1262.490296239" Feb 16 21:58:50 crc kubenswrapper[4792]: I0216 21:58:50.047877 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" path="/var/lib/kubelet/pods/fbdffea5-e44f-429e-b62a-5e6bcf9f3131/volumes" Feb 16 21:58:50 crc kubenswrapper[4792]: I0216 21:58:50.770829 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:51 crc kubenswrapper[4792]: I0216 21:58:51.772368 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener-log" containerID="cri-o://a6d5b77277d64342cfdfbb73b1ed465359d13f068d660dd07711d4cf961f2866" gracePeriod=30 Feb 16 21:58:51 crc kubenswrapper[4792]: I0216 21:58:51.772488 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener" containerID="cri-o://698fd71e6686508a6c16e0dacb8c6f3f8e6c766a747e7e79664955ddc7b1d262" gracePeriod=30 Feb 16 21:58:51 crc kubenswrapper[4792]: I0216 21:58:51.772529 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7855c46fdc-mcbx4" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker-log" containerID="cri-o://f05449ebc304e7a6125ca00b8374fada11f603f0318a4f7546ea0cfa9094ca70" gracePeriod=30 Feb 16 21:58:51 crc kubenswrapper[4792]: I0216 21:58:51.772545 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7855c46fdc-mcbx4" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker" containerID="cri-o://96f3c67fef5fa3064328203bcaa69ecbd74d3ab11c1d0ca0b014261a3b51bd3e" gracePeriod=30 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.571757 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.656305 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.656567 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="dnsmasq-dns" containerID="cri-o://42a2913e9ff4076b6bdc79ed2870d4c4983f7b4d79f23ec882385d293aae48f8" gracePeriod=10 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.791804 4792 generic.go:334] "Generic (PLEG): container finished" podID="72d59609-2910-4114-98d4-0f5154b95b1b" containerID="6492ad36f33c8e7001262910a59cafca97908ab648f406003297b7c2fc2e33e0" exitCode=0 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.791893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-njp9q" 
event={"ID":"72d59609-2910-4114-98d4-0f5154b95b1b","Type":"ContainerDied","Data":"6492ad36f33c8e7001262910a59cafca97908ab648f406003297b7c2fc2e33e0"} Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.796816 4792 generic.go:334] "Generic (PLEG): container finished" podID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerID="698fd71e6686508a6c16e0dacb8c6f3f8e6c766a747e7e79664955ddc7b1d262" exitCode=0 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.796848 4792 generic.go:334] "Generic (PLEG): container finished" podID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerID="a6d5b77277d64342cfdfbb73b1ed465359d13f068d660dd07711d4cf961f2866" exitCode=143 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.796908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerDied","Data":"698fd71e6686508a6c16e0dacb8c6f3f8e6c766a747e7e79664955ddc7b1d262"} Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.797853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerDied","Data":"a6d5b77277d64342cfdfbb73b1ed465359d13f068d660dd07711d4cf961f2866"} Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.804620 4792 generic.go:334] "Generic (PLEG): container finished" podID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerID="96f3c67fef5fa3064328203bcaa69ecbd74d3ab11c1d0ca0b014261a3b51bd3e" exitCode=0 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.804643 4792 generic.go:334] "Generic (PLEG): container finished" podID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerID="f05449ebc304e7a6125ca00b8374fada11f603f0318a4f7546ea0cfa9094ca70" exitCode=143 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.804704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerDied","Data":"96f3c67fef5fa3064328203bcaa69ecbd74d3ab11c1d0ca0b014261a3b51bd3e"} Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.804725 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerDied","Data":"f05449ebc304e7a6125ca00b8374fada11f603f0318a4f7546ea0cfa9094ca70"} Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.814732 4792 generic.go:334] "Generic (PLEG): container finished" podID="6432216a-a549-4060-8369-b6a0d86f1ba2" containerID="14e179d1594a1dad5a8b6bbc516a3156a3b7dfc968b1d4d68dc001b7f4b9502b" exitCode=0 Feb 16 21:58:52 crc kubenswrapper[4792]: I0216 21:58:52.814778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jvjtg" event={"ID":"6432216a-a549-4060-8369-b6a0d86f1ba2","Type":"ContainerDied","Data":"14e179d1594a1dad5a8b6bbc516a3156a3b7dfc968b1d4d68dc001b7f4b9502b"} Feb 16 21:58:53 crc kubenswrapper[4792]: I0216 21:58:53.871802 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerID="42a2913e9ff4076b6bdc79ed2870d4c4983f7b4d79f23ec882385d293aae48f8" exitCode=0 Feb 16 21:58:53 crc kubenswrapper[4792]: I0216 21:58:53.872235 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" 
event={"ID":"0a4bbdfa-4451-4626-994d-1334856bd30f","Type":"ContainerDied","Data":"42a2913e9ff4076b6bdc79ed2870d4c4983f7b4d79f23ec882385d293aae48f8"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.887190 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.899016 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.900184 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.935714 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-njp9q" event={"ID":"72d59609-2910-4114-98d4-0f5154b95b1b","Type":"ContainerDied","Data":"c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.935754 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c160c858a009d23b9d5e62dbf76e889e510850dc202f93a9d5844504b896f8" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.947408 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.947436 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-qqbl5" event={"ID":"0a4bbdfa-4451-4626-994d-1334856bd30f","Type":"ContainerDied","Data":"9833badc3c249eee2715f14690b315748a4674132fb9e1f02b964aa8681b6387"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.947484 4792 scope.go:117] "RemoveContainer" containerID="42a2913e9ff4076b6bdc79ed2870d4c4983f7b4d79f23ec882385d293aae48f8" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.954721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jvjtg" event={"ID":"6432216a-a549-4060-8369-b6a0d86f1ba2","Type":"ContainerDied","Data":"3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.954759 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7a50d01a4c5822289cb914d81d98aa8550b0de6756e7b961f87e3a92c54bba" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.957543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855c46fdc-mcbx4" event={"ID":"dd23f854-6ce4-49bf-b4ad-26546127bc2c","Type":"ContainerDied","Data":"c12ba5c4a3aa1a4e06c62b07cceea46dcb7363955b8a03a8ed90d1131da321c6"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.957589 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12ba5c4a3aa1a4e06c62b07cceea46dcb7363955b8a03a8ed90d1131da321c6" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.959683 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" event={"ID":"16cbf895-7e69-4422-be8e-ada6728e74d7","Type":"ContainerDied","Data":"c4d9d284c7b44e6377fd3b6d887d3eae7bb39f95b102b2c3138d16e490e1c4f8"} Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.959745 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-754cc64db8-4chxc" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.964535 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:54 crc kubenswrapper[4792]: I0216 21:58:54.993064 4792 scope.go:117] "RemoveContainer" containerID="29cde44eba16c61f0b26b84931e1461db7bf00f1c6c1a6929cdd17fa46c13172" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.000499 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.002414 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.014372 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.021749 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data\") pod \"72d59609-2910-4114-98d4-0f5154b95b1b\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.021798 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m7ck\" (UniqueName: \"kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck\") pod \"72d59609-2910-4114-98d4-0f5154b95b1b\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.021858 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs\") pod \"16cbf895-7e69-4422-be8e-ada6728e74d7\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.021984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle\") pod \"72d59609-2910-4114-98d4-0f5154b95b1b\" (UID: \"72d59609-2910-4114-98d4-0f5154b95b1b\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022027 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle\") pod \"16cbf895-7e69-4422-be8e-ada6728e74d7\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022054 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022071 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom\") pod \"16cbf895-7e69-4422-be8e-ada6728e74d7\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022178 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtt2r\" (UniqueName: \"kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r\") pod \"16cbf895-7e69-4422-be8e-ada6728e74d7\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022264 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8njsd\" (UniqueName: \"kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data\") pod \"16cbf895-7e69-4422-be8e-ada6728e74d7\" (UID: \"16cbf895-7e69-4422-be8e-ada6728e74d7\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022354 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022396 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.022411 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config\") pod \"0a4bbdfa-4451-4626-994d-1334856bd30f\" (UID: \"0a4bbdfa-4451-4626-994d-1334856bd30f\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.030524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck" (OuterVolumeSpecName: "kube-api-access-6m7ck") pod "72d59609-2910-4114-98d4-0f5154b95b1b" (UID: "72d59609-2910-4114-98d4-0f5154b95b1b"). InnerVolumeSpecName "kube-api-access-6m7ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.035942 4792 scope.go:117] "RemoveContainer" containerID="698fd71e6686508a6c16e0dacb8c6f3f8e6c766a747e7e79664955ddc7b1d262" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.037685 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs" (OuterVolumeSpecName: "logs") pod "16cbf895-7e69-4422-be8e-ada6728e74d7" (UID: "16cbf895-7e69-4422-be8e-ada6728e74d7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.050768 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16cbf895-7e69-4422-be8e-ada6728e74d7" (UID: "16cbf895-7e69-4422-be8e-ada6728e74d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.054320 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r" (OuterVolumeSpecName: "kube-api-access-dtt2r") pod "16cbf895-7e69-4422-be8e-ada6728e74d7" (UID: "16cbf895-7e69-4422-be8e-ada6728e74d7"). InnerVolumeSpecName "kube-api-access-dtt2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.066845 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd" (OuterVolumeSpecName: "kube-api-access-8njsd") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "kube-api-access-8njsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.081032 4792 scope.go:117] "RemoveContainer" containerID="a6d5b77277d64342cfdfbb73b1ed465359d13f068d660dd07711d4cf961f2866" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.111154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16cbf895-7e69-4422-be8e-ada6728e74d7" (UID: "16cbf895-7e69-4422-be8e-ada6728e74d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125534 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom\") pod \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125725 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125776 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle\") pod \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125891 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs\") pod \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.125911 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.126027 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data\") pod \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\" (UID: \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.126071 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.126094 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66k9s\" (UniqueName: \"kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s\") pod \"dd23f854-6ce4-49bf-b4ad-26546127bc2c\" (UID: 
\"dd23f854-6ce4-49bf-b4ad-26546127bc2c\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.126149 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8428q\" (UniqueName: \"kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q\") pod \"6432216a-a549-4060-8369-b6a0d86f1ba2\" (UID: \"6432216a-a549-4060-8369-b6a0d86f1ba2\") " Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.126375 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.128422 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs" (OuterVolumeSpecName: "logs") pod "dd23f854-6ce4-49bf-b4ad-26546127bc2c" (UID: "dd23f854-6ce4-49bf-b4ad-26546127bc2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136210 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6432216a-a549-4060-8369-b6a0d86f1ba2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136248 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd23f854-6ce4-49bf-b4ad-26546127bc2c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136263 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m7ck\" (UniqueName: \"kubernetes.io/projected/72d59609-2910-4114-98d4-0f5154b95b1b-kube-api-access-6m7ck\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136277 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16cbf895-7e69-4422-be8e-ada6728e74d7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136288 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136300 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136313 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtt2r\" (UniqueName: \"kubernetes.io/projected/16cbf895-7e69-4422-be8e-ada6728e74d7-kube-api-access-dtt2r\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.136325 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8njsd\" (UniqueName: \"kubernetes.io/projected/0a4bbdfa-4451-4626-994d-1334856bd30f-kube-api-access-8njsd\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.155300 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd23f854-6ce4-49bf-b4ad-26546127bc2c" (UID: "dd23f854-6ce4-49bf-b4ad-26546127bc2c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.155628 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.156095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q" (OuterVolumeSpecName: "kube-api-access-8428q") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "kube-api-access-8428q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.157335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s" (OuterVolumeSpecName: "kube-api-access-66k9s") pod "dd23f854-6ce4-49bf-b4ad-26546127bc2c" (UID: "dd23f854-6ce4-49bf-b4ad-26546127bc2c"). InnerVolumeSpecName "kube-api-access-66k9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.158803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts" (OuterVolumeSpecName: "scripts") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.238300 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.238334 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.238347 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66k9s\" (UniqueName: \"kubernetes.io/projected/dd23f854-6ce4-49bf-b4ad-26546127bc2c-kube-api-access-66k9s\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.238362 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8428q\" (UniqueName: \"kubernetes.io/projected/6432216a-a549-4060-8369-b6a0d86f1ba2-kube-api-access-8428q\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.238371 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.253690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72d59609-2910-4114-98d4-0f5154b95b1b" (UID: "72d59609-2910-4114-98d4-0f5154b95b1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.257705 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd23f854-6ce4-49bf-b4ad-26546127bc2c" (UID: "dd23f854-6ce4-49bf-b4ad-26546127bc2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: E0216 21:58:55.258148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.260017 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.284127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data" (OuterVolumeSpecName: "config-data") pod "16cbf895-7e69-4422-be8e-ada6728e74d7" (UID: "16cbf895-7e69-4422-be8e-ada6728e74d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.291544 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.299198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config" (OuterVolumeSpecName: "config") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.307875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.309412 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.326967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a4bbdfa-4451-4626-994d-1334856bd30f" (UID: "0a4bbdfa-4451-4626-994d-1334856bd30f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.326964 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data" (OuterVolumeSpecName: "config-data") pod "72d59609-2910-4114-98d4-0f5154b95b1b" (UID: "72d59609-2910-4114-98d4-0f5154b95b1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.331335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data" (OuterVolumeSpecName: "config-data") pod "dd23f854-6ce4-49bf-b4ad-26546127bc2c" (UID: "dd23f854-6ce4-49bf-b4ad-26546127bc2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340050 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340092 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340105 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d59609-2910-4114-98d4-0f5154b95b1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340116 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340125 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16cbf895-7e69-4422-be8e-ada6728e74d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340133 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340142 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340151 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340159 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340168 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4bbdfa-4451-4626-994d-1334856bd30f-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.340176 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd23f854-6ce4-49bf-b4ad-26546127bc2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.343004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data" (OuterVolumeSpecName: "config-data") pod "6432216a-a549-4060-8369-b6a0d86f1ba2" (UID: "6432216a-a549-4060-8369-b6a0d86f1ba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.441878 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6432216a-a549-4060-8369-b6a0d86f1ba2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.591530 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.598568 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-qqbl5"] Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.618725 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.620696 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-754cc64db8-4chxc"] Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.972071 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerStarted","Data":"1d869bb91d8f454ad43d26eb88fced8b4bea4b62b1612c8948e707e42ba710ce"} Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.972210 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="ceilometer-notification-agent" containerID="cri-o://f751dc4120e69b078dffc2224f8e0b13cefeeca2f0e9ad23bf9cd001474ebe18" gracePeriod=30 Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.972269 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="proxy-httpd" containerID="cri-o://1d869bb91d8f454ad43d26eb88fced8b4bea4b62b1612c8948e707e42ba710ce" gracePeriod=30 Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.972271 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="sg-core" containerID="cri-o://11cdc2bac82de5e912425bbdf0e165de3601044d447be4c97d6aef3d7abd1a74" gracePeriod=30 Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.972766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.973170 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-njp9q" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.989976 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7855c46fdc-mcbx4" Feb 16 21:58:55 crc kubenswrapper[4792]: I0216 21:58:55.998543 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jvjtg" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.041912 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" path="/var/lib/kubelet/pods/0a4bbdfa-4451-4626-994d-1334856bd30f/volumes" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.042985 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" path="/var/lib/kubelet/pods/16cbf895-7e69-4422-be8e-ada6728e74d7/volumes" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.099653 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.114516 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7855c46fdc-mcbx4"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.299961 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.300909 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.300926 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.300942 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="dnsmasq-dns" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.300948 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="dnsmasq-dns" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.300954 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" containerName="cinder-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.300961 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" containerName="cinder-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.300974 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.300980 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker-log" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301008 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301014 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301040 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="init" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301046 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="init" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301058 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301063 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api-log" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301076 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301082 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301090 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301096 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener-log" Feb 16 21:58:56 crc kubenswrapper[4792]: E0216 21:58:56.301112 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" containerName="heat-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301117 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" containerName="heat-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301295 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301305 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" containerName="heat-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301380 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301395 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdffea5-e44f-429e-b62a-5e6bcf9f3131" containerName="barbican-api" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301405 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4bbdfa-4451-4626-994d-1334856bd30f" containerName="dnsmasq-dns" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301417 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301426 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cbf895-7e69-4422-be8e-ada6728e74d7" containerName="barbican-keystone-listener" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301439 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" containerName="cinder-db-sync" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.301448 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" containerName="barbican-worker-log" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.325471 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.330802 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.331166 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hn26t" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.332792 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.339074 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.390976 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.483757 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.486395 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.492975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklws\" (UniqueName: \"kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.493030 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.493075 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.493146 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.493167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.493223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" 
Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.502887 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596491 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596671 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596710 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxwlf\" (UniqueName: \"kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596767 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596843 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.596911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklws\" (UniqueName: \"kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.597223 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.603361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.605003 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.606964 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.607741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.623038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklws\" (UniqueName: \"kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws\") pod \"cinder-scheduler-0\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.642012 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.646828 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.650489 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.650519 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.685115 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.700837 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxwlf\" (UniqueName: \"kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.700921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.700940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.700984 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.701024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.701066 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.702441 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.720551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.727524 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.728565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.729732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.758855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxwlf\" (UniqueName: \"kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf\") pod \"dnsmasq-dns-6bb4fc677f-j8dss\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") " pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.815319 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.831544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.831659 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.831706 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.831848 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.832042 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.832130 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2tg4\" (UniqueName: \"kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.832359 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.934987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935262 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2tg4\" 
(UniqueName: \"kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935423 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.935520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.940841 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.945751 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.947693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.949171 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.962113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.964707 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:56 crc kubenswrapper[4792]: I0216 21:58:56.972194 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2tg4\" (UniqueName: \"kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4\") pod \"cinder-api-0\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " pod="openstack/cinder-api-0" Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.022076 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerID="1d869bb91d8f454ad43d26eb88fced8b4bea4b62b1612c8948e707e42ba710ce" exitCode=0 Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.022106 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerID="11cdc2bac82de5e912425bbdf0e165de3601044d447be4c97d6aef3d7abd1a74" exitCode=2 Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.022126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerDied","Data":"1d869bb91d8f454ad43d26eb88fced8b4bea4b62b1612c8948e707e42ba710ce"} Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.022150 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerDied","Data":"11cdc2bac82de5e912425bbdf0e165de3601044d447be4c97d6aef3d7abd1a74"} Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.137928 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.424288 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.452910 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:58:57 crc kubenswrapper[4792]: I0216 21:58:57.769251 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.046746 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd23f854-6ce4-49bf-b4ad-26546127bc2c" path="/var/lib/kubelet/pods/dd23f854-6ce4-49bf-b4ad-26546127bc2c/volumes" Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.052719 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerStarted","Data":"9aa7c15109006a533cef3cbe55780fa75cd591f2ab947b23023e14ee41136a43"} Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.052936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerStarted","Data":"d1dd97f88d0984a0528f81739405f92d29a428257d1b9a2f9267b497e1f38eff"} Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.068002 4792 generic.go:334] "Generic (PLEG): container finished" podID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerID="165021f89fd6e54d747800b5ccf191c979959895dc4beea875723407cb23715a" exitCode=0 Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.068044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" event={"ID":"78e87464-f75c-47e0-b302-a98fe79d4f43","Type":"ContainerDied","Data":"165021f89fd6e54d747800b5ccf191c979959895dc4beea875723407cb23715a"} Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.068073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" event={"ID":"78e87464-f75c-47e0-b302-a98fe79d4f43","Type":"ContainerStarted","Data":"1801d7301bd3c0dd05c7cd28c62fff34be213c5c245bb82f2f0a2345f4e3801d"} Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.472937 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:58:58 crc kubenswrapper[4792]: I0216 21:58:58.834853 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.111073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerStarted","Data":"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0"} Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.119682 4792 generic.go:334] "Generic (PLEG): container finished" podID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerID="f751dc4120e69b078dffc2224f8e0b13cefeeca2f0e9ad23bf9cd001474ebe18" exitCode=0 Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.119739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerDied","Data":"f751dc4120e69b078dffc2224f8e0b13cefeeca2f0e9ad23bf9cd001474ebe18"} Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.142814 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" event={"ID":"78e87464-f75c-47e0-b302-a98fe79d4f43","Type":"ContainerStarted","Data":"92d738d240a38d7d5c41ed8b98c5cba777ecf76891f71e9d7d1bc73e6200c095"} Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.148795 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.169627 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" podStartSLOduration=3.169592339 podStartE2EDuration="3.169592339s" podCreationTimestamp="2026-02-16 21:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:58:59.163753161 +0000 UTC m=+1271.817032062" watchObservedRunningTime="2026-02-16 21:58:59.169592339 +0000 UTC m=+1271.822871240" Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.414819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-698d56d666-pskd9" Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.500102 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.500329 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" containerID="cri-o://7dc14f19510a407f74bbcf930bdc45733ef59da96982c01d1a1a7222496e436f" gracePeriod=30 Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.500761 4792 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api" containerID="cri-o://30d6ce3d7fa4be36ddc6cab1786a9e98437360df403a74ea7c307dfdbb3c02c6" gracePeriod=30 Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.544226 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": EOF" Feb 16 21:58:59 crc kubenswrapper[4792]: I0216 21:58:59.920712 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.036993 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037138 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz898\" (UniqueName: \"kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037252 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037273 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle\") pod \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\" (UID: \"fbad2630-a4ca-43fc-8c09-2c127888d3f4\") " Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037572 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: 
"fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037743 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037853 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.037869 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbad2630-a4ca-43fc-8c09-2c127888d3f4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.044363 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898" (OuterVolumeSpecName: "kube-api-access-gz898") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "kube-api-access-gz898". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.049828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts" (OuterVolumeSpecName: "scripts") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.141307 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.141364 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz898\" (UniqueName: \"kubernetes.io/projected/fbad2630-a4ca-43fc-8c09-2c127888d3f4-kube-api-access-gz898\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.171391 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data" (OuterVolumeSpecName: "config-data") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.188486 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerStarted","Data":"ce7b1af64ba62a7f4ed42c58e4ca2ff5e185862fa80f4322aeb2aa01e29c07ea"} Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.191908 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.199804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fbad2630-a4ca-43fc-8c09-2c127888d3f4","Type":"ContainerDied","Data":"1831770f20826491de119ed39ddc11e2c9bd4cf81c41041097d055e4d764976f"} Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.199886 4792 scope.go:117] "RemoveContainer" containerID="1d869bb91d8f454ad43d26eb88fced8b4bea4b62b1612c8948e707e42ba710ce" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.200126 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.212321 4792 generic.go:334] "Generic (PLEG): container finished" podID="29861710-f00a-4c5b-9e57-e116983057ee" containerID="7dc14f19510a407f74bbcf930bdc45733ef59da96982c01d1a1a7222496e436f" exitCode=143 Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.212706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerDied","Data":"7dc14f19510a407f74bbcf930bdc45733ef59da96982c01d1a1a7222496e436f"} Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.221441 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbad2630-a4ca-43fc-8c09-2c127888d3f4" (UID: "fbad2630-a4ca-43fc-8c09-2c127888d3f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.246589 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.246883 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.246902 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbad2630-a4ca-43fc-8c09-2c127888d3f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.247896 4792 scope.go:117] "RemoveContainer" containerID="11cdc2bac82de5e912425bbdf0e165de3601044d447be4c97d6aef3d7abd1a74" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.268788 4792 scope.go:117] "RemoveContainer" containerID="f751dc4120e69b078dffc2224f8e0b13cefeeca2f0e9ad23bf9cd001474ebe18" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.586742 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.621534 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.633987 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:00 crc kubenswrapper[4792]: E0216 21:59:00.634412 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="proxy-httpd" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634430 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="proxy-httpd" Feb 16 21:59:00 crc kubenswrapper[4792]: E0216 21:59:00.634453 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="sg-core" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634464 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="sg-core" Feb 16 21:59:00 crc kubenswrapper[4792]: E0216 21:59:00.634498 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="ceilometer-notification-agent" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634506 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="ceilometer-notification-agent" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634744 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="proxy-httpd" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634765 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="ceilometer-notification-agent" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.634775 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" containerName="sg-core" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.636844 4792 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.640398 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.640647 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.655557 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.755474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.755515 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6v2\" (UniqueName: \"kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.755869 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.755921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.755978 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.756073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.756187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.836448 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.857795 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.857838 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.857868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.857910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.857957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.858056 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.858077 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6v2\" (UniqueName: \"kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.858633 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.858741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.864425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.865424 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.865870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.921480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6v2\" (UniqueName: \"kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.944478 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " pod="openstack/ceilometer-0" Feb 16 21:59:00 crc kubenswrapper[4792]: I0216 21:59:00.966555 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.291080 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.291747 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7686fdb8c5-qzv2j" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-api" containerID="cri-o://7b5c510268f2f3057462dc91616df0420871dfb753c6a50bad8fb3ec29ce3bc2" gracePeriod=30 Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.293776 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7686fdb8c5-qzv2j" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" containerID="cri-o://f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b" gracePeriod=30 Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.311846 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerStarted","Data":"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5"} Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.312045 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api-log" containerID="cri-o://ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" gracePeriod=30 Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.312303 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.312351 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api" containerID="cri-o://f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" gracePeriod=30 Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.349241 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-58f4767d9c-gk2k8"] Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.378141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerStarted","Data":"c53d260a5c073f4a966db2b8fd05bdb2e317e8fd67c94e33f585f51cd2662ca1"} Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.378307 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.378637 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58f4767d9c-gk2k8"] Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.380461 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.380441256 podStartE2EDuration="5.380441256s" podCreationTimestamp="2026-02-16 21:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:01.34917411 +0000 UTC m=+1274.002453001" watchObservedRunningTime="2026-02-16 21:59:01.380441256 +0000 UTC m=+1274.033720167" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.433315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-internal-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.441040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-combined-ca-bundle\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.444893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-ovndb-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.445260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-httpd-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.445343 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lsk\" (UniqueName: \"kubernetes.io/projected/1a645a10-4e7b-42ed-a764-9cafab1d6086-kube-api-access-l9lsk\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.445572 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: 
\"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.445675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-public-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.460096 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.49388517 podStartE2EDuration="5.460077671s" podCreationTimestamp="2026-02-16 21:58:56 +0000 UTC" firstStartedPulling="2026-02-16 21:58:57.455617686 +0000 UTC m=+1270.108896577" lastFinishedPulling="2026-02-16 21:58:58.421810187 +0000 UTC m=+1271.075089078" observedRunningTime="2026-02-16 21:59:01.39498103 +0000 UTC m=+1274.048259931" watchObservedRunningTime="2026-02-16 21:59:01.460077671 +0000 UTC m=+1274.113356562" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.478591 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7686fdb8c5-qzv2j" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9696/\": read tcp 10.217.0.2:46754->10.217.0.193:9696: read: connection reset by peer" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.548461 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-public-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.548561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-internal-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.548698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-combined-ca-bundle\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.548943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-ovndb-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.549503 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-httpd-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.549546 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l9lsk\" (UniqueName: \"kubernetes.io/projected/1a645a10-4e7b-42ed-a764-9cafab1d6086-kube-api-access-l9lsk\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.549773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.556560 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-httpd-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.559714 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-public-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.560400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-ovndb-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.561479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-internal-tls-certs\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.563135 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-combined-ca-bundle\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.569934 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a645a10-4e7b-42ed-a764-9cafab1d6086-config\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.572261 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lsk\" (UniqueName: \"kubernetes.io/projected/1a645a10-4e7b-42ed-a764-9cafab1d6086-kube-api-access-l9lsk\") pod \"neutron-58f4767d9c-gk2k8\" (UID: \"1a645a10-4e7b-42ed-a764-9cafab1d6086\") " pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.687270 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 21:59:01 crc kubenswrapper[4792]: E0216 21:59:01.727455 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d66ae12_bf74_43a9_98d3_c5c19b097ed7.slice/crio-ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d986148_8fca_429d_a235_1d41a3238710.slice/crio-f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.738268 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:01 crc kubenswrapper[4792]: I0216 21:59:01.824103 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.050355 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbad2630-a4ca-43fc-8c09-2c127888d3f4" path="/var/lib/kubelet/pods/fbad2630-a4ca-43fc-8c09-2c127888d3f4/volumes" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.155507 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.273275 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.273334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2tg4\" (UniqueName: \"kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.275328 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.275432 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.275638 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.275673 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.275702 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id\") pod \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\" (UID: \"1d66ae12-bf74-43a9-98d3-c5c19b097ed7\") " Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.276899 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.282688 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts" (OuterVolumeSpecName: "scripts") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.284785 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4" (OuterVolumeSpecName: "kube-api-access-t2tg4") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "kube-api-access-t2tg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.285464 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs" (OuterVolumeSpecName: "logs") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.294804 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.327286 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.343069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data" (OuterVolumeSpecName: "config-data") pod "1d66ae12-bf74-43a9-98d3-c5c19b097ed7" (UID: "1d66ae12-bf74-43a9-98d3-c5c19b097ed7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.379810 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380074 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380150 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380236 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380322 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2tg4\" (UniqueName: \"kubernetes.io/projected/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-kube-api-access-t2tg4\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380431 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.380502 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d66ae12-bf74-43a9-98d3-c5c19b097ed7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.381564 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d986148-8fca-429d-a235-1d41a3238710" containerID="f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b" exitCode=0 Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.381633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerDied","Data":"f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b"} Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383129 4792 generic.go:334] "Generic (PLEG): container finished" podID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerID="f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" exitCode=0 Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383214 4792 generic.go:334] "Generic (PLEG): container finished" podID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerID="ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" exitCode=143 Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerDied","Data":"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5"} Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383443 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerDied","Data":"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0"} Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1d66ae12-bf74-43a9-98d3-c5c19b097ed7","Type":"ContainerDied","Data":"9aa7c15109006a533cef3cbe55780fa75cd591f2ab947b23023e14ee41136a43"} Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383676 4792 scope.go:117] "RemoveContainer" containerID="f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.383901 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.391697 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerStarted","Data":"d19c3da3a434f65c4f6b895e49aa1db84eee32c67d4715f7d0f96e02bc86308b"} Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.434465 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.441183 4792 scope.go:117] "RemoveContainer" containerID="ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.448410 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.497611 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58f4767d9c-gk2k8"] Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.537464 4792 scope.go:117] "RemoveContainer" containerID="f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" Feb 16 21:59:02 crc kubenswrapper[4792]: E0216 21:59:02.539004 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5\": container with ID starting with f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5 not found: ID does not exist" containerID="f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.539042 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5"} err="failed to get container status \"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5\": rpc error: code = NotFound desc = could not find container \"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5\": container with ID starting with f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5 not found: ID does not exist" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.539066 4792 scope.go:117] "RemoveContainer" containerID="ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" Feb 16 21:59:02 crc kubenswrapper[4792]: E0216 21:59:02.539661 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0\": container with ID starting with ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0 not found: ID does not exist" 
containerID="ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.539715 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0"} err="failed to get container status \"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0\": rpc error: code = NotFound desc = could not find container \"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0\": container with ID starting with ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0 not found: ID does not exist" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.539732 4792 scope.go:117] "RemoveContainer" containerID="f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.539999 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5"} err="failed to get container status \"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5\": rpc error: code = NotFound desc = could not find container \"f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5\": container with ID starting with f239ba7d52ae6127399720e4883ad9ef2f4509df7e46101a82fe965eb80a92a5 not found: ID does not exist" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.540025 4792 scope.go:117] "RemoveContainer" containerID="ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.541096 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0"} err="failed to get container status \"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0\": rpc error: code = NotFound desc = could not find container \"ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0\": container with ID starting with ffa39e9e8f6609fc37c5ede9eb1bfc2def01b987eb78dbefb8b13c4f0cbb68c0 not found: ID does not exist" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.541181 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 21:59:02 crc kubenswrapper[4792]: E0216 21:59:02.542161 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.542179 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api" Feb 16 21:59:02 crc kubenswrapper[4792]: E0216 21:59:02.542204 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api-log" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.542211 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api-log" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.542566 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api-log" Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.542589 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" containerName="cinder-api" Feb 16 
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.551936 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.553879 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.554060 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.567315 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.696929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0993d32-4203-4fa0-a527-917981f0348d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697378 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0993d32-4203-4fa0-a527-917981f0348d-logs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697464 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tws5\" (UniqueName: \"kubernetes.io/projected/d0993d32-4203-4fa0-a527-917981f0348d-kube-api-access-5tws5\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697698 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.697824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-scripts\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0993d32-4203-4fa0-a527-917981f0348d-logs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799389 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tws5\" (UniqueName: \"kubernetes.io/projected/d0993d32-4203-4fa0-a527-917981f0348d-kube-api-access-5tws5\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799534 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-scripts\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799739 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0993d32-4203-4fa0-a527-917981f0348d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799808 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.799882 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0993d32-4203-4fa0-a527-917981f0348d-logs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.800675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0993d32-4203-4fa0-a527-917981f0348d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.806360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.806421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.810951 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.812100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.812799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-scripts\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.818736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0993d32-4203-4fa0-a527-917981f0348d-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.830316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tws5\" (UniqueName: \"kubernetes.io/projected/d0993d32-4203-4fa0-a527-917981f0348d-kube-api-access-5tws5\") pod \"cinder-api-0\" (UID: \"d0993d32-4203-4fa0-a527-917981f0348d\") " pod="openstack/cinder-api-0"
Feb 16 21:59:02 crc kubenswrapper[4792]: I0216 21:59:02.873509 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.383236 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.385817 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7686fdb8c5-qzv2j" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9696/\": dial tcp 10.217.0.193:9696: connect: connection refused"
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.405691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0993d32-4203-4fa0-a527-917981f0348d","Type":"ContainerStarted","Data":"d06e867effe751f5a38f674eb20cef609cffdbacadc5ca4d5020a74af2a04a55"}
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.408042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerStarted","Data":"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c"}
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.415363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f4767d9c-gk2k8" event={"ID":"1a645a10-4e7b-42ed-a764-9cafab1d6086","Type":"ContainerStarted","Data":"a003db2de30059952aafe86a429485ad70b682f9d95e02b2253a6b7c069387b3"}
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.415399 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f4767d9c-gk2k8" event={"ID":"1a645a10-4e7b-42ed-a764-9cafab1d6086","Type":"ContainerStarted","Data":"d9773535cabace168f3375a946698cc8aa53e7811452b7e9364f9af9cbde1f79"}
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.415411 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58f4767d9c-gk2k8" event={"ID":"1a645a10-4e7b-42ed-a764-9cafab1d6086","Type":"ContainerStarted","Data":"edd9a2ba9ecca75061e0bbba7b1056f000337b70eb41b31ac1a7b3531d30c97b"}
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.415445 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58f4767d9c-gk2k8"
Feb 16 21:59:03 crc kubenswrapper[4792]: I0216 21:59:03.453316 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58f4767d9c-gk2k8" podStartSLOduration=2.453288409 podStartE2EDuration="2.453288409s" podCreationTimestamp="2026-02-16 21:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:03.434389697 +0000 UTC m=+1276.087668598" watchObservedRunningTime="2026-02-16 21:59:03.453288409 +0000 UTC m=+1276.106567300"
Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.054046 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d66ae12-bf74-43a9-98d3-c5c19b097ed7" path="/var/lib/kubelet/pods/1d66ae12-bf74-43a9-98d3-c5c19b097ed7/volumes"
Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.157354 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:35778->10.217.0.202:9311: read: connection reset by peer"
Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.157390 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:35792->10.217.0.202:9311: read: connection reset by peer"
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84b44888c4-9ndb2" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9311/healthcheck\": read tcp 10.217.0.2:35792->10.217.0.202:9311: read: connection reset by peer" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.436798 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0993d32-4203-4fa0-a527-917981f0348d","Type":"ContainerStarted","Data":"fa6a8a9689f6196767210fbbf1e4aad5417e601f6124369ed90a9c616ac08602"} Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.442324 4792 generic.go:334] "Generic (PLEG): container finished" podID="29861710-f00a-4c5b-9e57-e116983057ee" containerID="30d6ce3d7fa4be36ddc6cab1786a9e98437360df403a74ea7c307dfdbb3c02c6" exitCode=0 Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.442370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerDied","Data":"30d6ce3d7fa4be36ddc6cab1786a9e98437360df403a74ea7c307dfdbb3c02c6"} Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.449128 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerStarted","Data":"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f"} Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.449159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerStarted","Data":"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891"} Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.668112 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.750017 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ckgm\" (UniqueName: \"kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm\") pod \"29861710-f00a-4c5b-9e57-e116983057ee\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.750097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data\") pod \"29861710-f00a-4c5b-9e57-e116983057ee\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.750142 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs\") pod \"29861710-f00a-4c5b-9e57-e116983057ee\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.750186 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom\") pod \"29861710-f00a-4c5b-9e57-e116983057ee\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.750227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle\") pod \"29861710-f00a-4c5b-9e57-e116983057ee\" (UID: \"29861710-f00a-4c5b-9e57-e116983057ee\") " Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.752089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs" (OuterVolumeSpecName: "logs") pod "29861710-f00a-4c5b-9e57-e116983057ee" (UID: "29861710-f00a-4c5b-9e57-e116983057ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.756109 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29861710-f00a-4c5b-9e57-e116983057ee" (UID: "29861710-f00a-4c5b-9e57-e116983057ee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.781891 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm" (OuterVolumeSpecName: "kube-api-access-8ckgm") pod "29861710-f00a-4c5b-9e57-e116983057ee" (UID: "29861710-f00a-4c5b-9e57-e116983057ee"). InnerVolumeSpecName "kube-api-access-8ckgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.787700 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29861710-f00a-4c5b-9e57-e116983057ee" (UID: "29861710-f00a-4c5b-9e57-e116983057ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.819100 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data" (OuterVolumeSpecName: "config-data") pod "29861710-f00a-4c5b-9e57-e116983057ee" (UID: "29861710-f00a-4c5b-9e57-e116983057ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.853134 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.853169 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ckgm\" (UniqueName: \"kubernetes.io/projected/29861710-f00a-4c5b-9e57-e116983057ee-kube-api-access-8ckgm\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.853185 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.853197 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29861710-f00a-4c5b-9e57-e116983057ee-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:04 crc kubenswrapper[4792]: I0216 21:59:04.853209 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29861710-f00a-4c5b-9e57-e116983057ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.488512 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84b44888c4-9ndb2" event={"ID":"29861710-f00a-4c5b-9e57-e116983057ee","Type":"ContainerDied","Data":"dc1cb62bce339a8f712ecc149a1a3812e684e08f1b76e0d9a428f0d662f6f812"} Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.488876 4792 scope.go:117] "RemoveContainer" containerID="30d6ce3d7fa4be36ddc6cab1786a9e98437360df403a74ea7c307dfdbb3c02c6" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.489053 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84b44888c4-9ndb2" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.491706 4792 generic.go:334] "Generic (PLEG): container finished" podID="8d986148-8fca-429d-a235-1d41a3238710" containerID="7b5c510268f2f3057462dc91616df0420871dfb753c6a50bad8fb3ec29ce3bc2" exitCode=0 Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.491792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerDied","Data":"7b5c510268f2f3057462dc91616df0420871dfb753c6a50bad8fb3ec29ce3bc2"} Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.501791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0993d32-4203-4fa0-a527-917981f0348d","Type":"ContainerStarted","Data":"5628b4c174a0c727b80d5cf944918efa2181b1476d59b32410cc2014766e1a37"} Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.502028 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.543049 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.543023259 podStartE2EDuration="3.543023259s" podCreationTimestamp="2026-02-16 21:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:05.531715513 +0000 UTC m=+1278.184994414" watchObservedRunningTime="2026-02-16 21:59:05.543023259 +0000 UTC m=+1278.196302140" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.557961 4792 scope.go:117] "RemoveContainer" containerID="7dc14f19510a407f74bbcf930bdc45733ef59da96982c01d1a1a7222496e436f" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.569947 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.583480 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84b44888c4-9ndb2"] Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.617008 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7j58\" (UniqueName: \"kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805255 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805365 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805411 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.805472 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs\") pod \"8d986148-8fca-429d-a235-1d41a3238710\" (UID: \"8d986148-8fca-429d-a235-1d41a3238710\") " Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.812039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58" (OuterVolumeSpecName: "kube-api-access-s7j58") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "kube-api-access-s7j58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.813511 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.885392 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.891776 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.899648 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config" (OuterVolumeSpecName: "config") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.902725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909363 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909400 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909409 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909418 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909429 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.909438 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7j58\" (UniqueName: \"kubernetes.io/projected/8d986148-8fca-429d-a235-1d41a3238710-kube-api-access-s7j58\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:05 crc kubenswrapper[4792]: I0216 21:59:05.942740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8d986148-8fca-429d-a235-1d41a3238710" (UID: "8d986148-8fca-429d-a235-1d41a3238710"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.011195 4792 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d986148-8fca-429d-a235-1d41a3238710-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.044226 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29861710-f00a-4c5b-9e57-e116983057ee" path="/var/lib/kubelet/pods/29861710-f00a-4c5b-9e57-e116983057ee/volumes" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.525494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerStarted","Data":"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b"} Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.526015 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.527803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7686fdb8c5-qzv2j" event={"ID":"8d986148-8fca-429d-a235-1d41a3238710","Type":"ContainerDied","Data":"a44927403482cafc74f9be989c9f26c4c70610b518983290d3ed85e07dc7610e"} Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.527862 4792 scope.go:117] "RemoveContainer" containerID="f0da8de8b88d2869adae058213216c56c4cbbe8bdb216d9a2208bc115502388b" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.527874 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7686fdb8c5-qzv2j" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.563144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9790074669999997 podStartE2EDuration="6.563123149s" podCreationTimestamp="2026-02-16 21:59:00 +0000 UTC" firstStartedPulling="2026-02-16 21:59:01.864190615 +0000 UTC m=+1274.517469506" lastFinishedPulling="2026-02-16 21:59:05.448306297 +0000 UTC m=+1278.101585188" observedRunningTime="2026-02-16 21:59:06.554788543 +0000 UTC m=+1279.208067454" watchObservedRunningTime="2026-02-16 21:59:06.563123149 +0000 UTC m=+1279.216402070" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.572699 4792 scope.go:117] "RemoveContainer" containerID="7b5c510268f2f3057462dc91616df0420871dfb753c6a50bad8fb3ec29ce3bc2" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.588975 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.603549 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7686fdb8c5-qzv2j"] Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.817797 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.893317 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.893533 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-5f244" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="dnsmasq-dns" containerID="cri-o://6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1" gracePeriod=10 Feb 16 21:59:06 crc kubenswrapper[4792]: I0216 21:59:06.955058 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.007263 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.448778 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542514 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jffmn\" (UniqueName: \"kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542680 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542740 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542779 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.542806 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config\") pod \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\" (UID: \"6d33c31b-a60a-4f1e-bdf0-108837e3449c\") " Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.548716 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn" (OuterVolumeSpecName: "kube-api-access-jffmn") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "kube-api-access-jffmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.566041 4792 generic.go:334] "Generic (PLEG): container finished" podID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerID="6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1" exitCode=0 Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.566099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-5f244" event={"ID":"6d33c31b-a60a-4f1e-bdf0-108837e3449c","Type":"ContainerDied","Data":"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1"} Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.566181 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-5f244" event={"ID":"6d33c31b-a60a-4f1e-bdf0-108837e3449c","Type":"ContainerDied","Data":"dfb552a43c0ba77763d88e7d4da07374a18d507cda1dc261294e6e33809e49a0"} Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.566129 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-5f244" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.566207 4792 scope.go:117] "RemoveContainer" containerID="6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.568586 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="cinder-scheduler" containerID="cri-o://ce7b1af64ba62a7f4ed42c58e4ca2ff5e185862fa80f4322aeb2aa01e29c07ea" gracePeriod=30 Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.569243 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="probe" containerID="cri-o://c53d260a5c073f4a966db2b8fd05bdb2e317e8fd67c94e33f585f51cd2662ca1" gracePeriod=30 Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.622695 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.634672 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.645346 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jffmn\" (UniqueName: \"kubernetes.io/projected/6d33c31b-a60a-4f1e-bdf0-108837e3449c-kube-api-access-jffmn\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.645388 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.645437 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.645759 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.669299 4792 scope.go:117] "RemoveContainer" containerID="39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.671997 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.675168 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config" (OuterVolumeSpecName: "config") pod "6d33c31b-a60a-4f1e-bdf0-108837e3449c" (UID: "6d33c31b-a60a-4f1e-bdf0-108837e3449c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.699084 4792 scope.go:117] "RemoveContainer" containerID="6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1" Feb 16 21:59:07 crc kubenswrapper[4792]: E0216 21:59:07.699550 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1\": container with ID starting with 6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1 not found: ID does not exist" containerID="6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.699576 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1"} err="failed to get container status \"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1\": rpc error: code = NotFound desc = could not find container \"6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1\": container with ID starting with 6cd533c2fd24109e0a264d5b68a2e4f36939bc0c50ad886f5c41d1353bef37e1 not found: ID does not exist" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.699609 4792 scope.go:117] "RemoveContainer" containerID="39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb" Feb 16 21:59:07 crc kubenswrapper[4792]: E0216 21:59:07.699976 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb\": container with ID starting with 39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb not found: ID does not exist" containerID="39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.700038 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb"} err="failed to get container status \"39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb\": rpc error: code = NotFound desc = could not find container \"39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb\": container with ID starting with 39ab503381f5e36a6f931e15461cda89efcd8346d45063aa9dac1fc326be60eb not found: ID does not exist" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.747712 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.747758 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.747774 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d33c31b-a60a-4f1e-bdf0-108837e3449c-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.900239 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:59:07 crc kubenswrapper[4792]: I0216 21:59:07.910851 4792 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-5f244"] Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.044834 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" path="/var/lib/kubelet/pods/6d33c31b-a60a-4f1e-bdf0-108837e3449c/volumes" Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.045581 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d986148-8fca-429d-a235-1d41a3238710" path="/var/lib/kubelet/pods/8d986148-8fca-429d-a235-1d41a3238710/volumes" Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.584333 4792 generic.go:334] "Generic (PLEG): container finished" podID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerID="c53d260a5c073f4a966db2b8fd05bdb2e317e8fd67c94e33f585f51cd2662ca1" exitCode=0 Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.584605 4792 generic.go:334] "Generic (PLEG): container finished" podID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerID="ce7b1af64ba62a7f4ed42c58e4ca2ff5e185862fa80f4322aeb2aa01e29c07ea" exitCode=0 Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.584534 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerDied","Data":"c53d260a5c073f4a966db2b8fd05bdb2e317e8fd67c94e33f585f51cd2662ca1"} Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.584671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerDied","Data":"ce7b1af64ba62a7f4ed42c58e4ca2ff5e185862fa80f4322aeb2aa01e29c07ea"} Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.973994 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:59:08 crc kubenswrapper[4792]: I0216 21:59:08.997941 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.029774 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.077901 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.077971 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.078030 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.078194 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.078339 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dklws\" (UniqueName: \"kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.078372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle\") pod \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\" (UID: \"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646\") " Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.078337 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.079124 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.085146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.098865 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts" (OuterVolumeSpecName: "scripts") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.098936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws" (OuterVolumeSpecName: "kube-api-access-dklws") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "kube-api-access-dklws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.182277 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dklws\" (UniqueName: \"kubernetes.io/projected/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-kube-api-access-dklws\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.182305 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.182315 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.190085 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.191855 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data" (OuterVolumeSpecName: "config-data") pod "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" (UID: "e5b8ab5c-ca2e-44a3-9b82-9b9b99496646"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.285090 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.285122 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.318715 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9686f857b-mxcsr"] Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319164 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="probe" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319182 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="probe" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319197 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-api" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319205 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-api" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319219 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="init" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319226 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="init" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319251 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319260 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319272 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319278 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319286 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="dnsmasq-dns" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319291 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="dnsmasq-dns" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319309 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="cinder-scheduler" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319315 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="cinder-scheduler" Feb 16 21:59:09 crc kubenswrapper[4792]: E0216 21:59:09.319328 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319334 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319522 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319533 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33c31b-a60a-4f1e-bdf0-108837e3449c" containerName="dnsmasq-dns" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319542 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-httpd" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319557 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="cinder-scheduler" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319569 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" containerName="probe" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319579 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d986148-8fca-429d-a235-1d41a3238710" containerName="neutron-api" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.319608 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="29861710-f00a-4c5b-9e57-e116983057ee" containerName="barbican-api-log" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.320717 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.365677 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9686f857b-mxcsr"] Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg2w\" (UniqueName: \"kubernetes.io/projected/616f13af-2b9a-40da-a031-aa421f1ff745-kube-api-access-gdg2w\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-config-data\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386403 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-scripts\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-internal-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-public-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-combined-ca-bundle\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.386760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/616f13af-2b9a-40da-a031-aa421f1ff745-logs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488524 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-config-data\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-scripts\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-internal-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488708 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-public-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-combined-ca-bundle\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488811 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/616f13af-2b9a-40da-a031-aa421f1ff745-logs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.488882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg2w\" (UniqueName: \"kubernetes.io/projected/616f13af-2b9a-40da-a031-aa421f1ff745-kube-api-access-gdg2w\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.489848 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/616f13af-2b9a-40da-a031-aa421f1ff745-logs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.491764 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-scripts\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.493166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-combined-ca-bundle\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.493699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-public-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") 
" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.505823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-config-data\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.505952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/616f13af-2b9a-40da-a031-aa421f1ff745-internal-tls-certs\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.508524 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg2w\" (UniqueName: \"kubernetes.io/projected/616f13af-2b9a-40da-a031-aa421f1ff745-kube-api-access-gdg2w\") pod \"placement-9686f857b-mxcsr\" (UID: \"616f13af-2b9a-40da-a031-aa421f1ff745\") " pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.604996 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.605663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5b8ab5c-ca2e-44a3-9b82-9b9b99496646","Type":"ContainerDied","Data":"d1dd97f88d0984a0528f81739405f92d29a428257d1b9a2f9267b497e1f38eff"} Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.605727 4792 scope.go:117] "RemoveContainer" containerID="c53d260a5c073f4a966db2b8fd05bdb2e317e8fd67c94e33f585f51cd2662ca1" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.652784 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.662653 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.669820 4792 scope.go:117] "RemoveContainer" containerID="ce7b1af64ba62a7f4ed42c58e4ca2ff5e185862fa80f4322aeb2aa01e29c07ea" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.680925 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.711075 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.713748 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.718553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.721947 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.801888 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.801925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmwx\" (UniqueName: \"kubernetes.io/projected/b1584d19-127a-4d77-8e66-3096a62ae789-kube-api-access-kqmwx\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.801960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-scripts\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.802061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.802088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.802136 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1584d19-127a-4d77-8e66-3096a62ae789-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1584d19-127a-4d77-8e66-3096a62ae789-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907129 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kqmwx\" (UniqueName: \"kubernetes.io/projected/b1584d19-127a-4d77-8e66-3096a62ae789-kube-api-access-kqmwx\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-scripts\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907269 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.907872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1584d19-127a-4d77-8e66-3096a62ae789-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.913644 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.914642 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-scripts\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.915321 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.917093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1584d19-127a-4d77-8e66-3096a62ae789-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" Feb 16 21:59:09 crc kubenswrapper[4792]: I0216 21:59:09.923940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmwx\" (UniqueName: \"kubernetes.io/projected/b1584d19-127a-4d77-8e66-3096a62ae789-kube-api-access-kqmwx\") pod \"cinder-scheduler-0\" (UID: \"b1584d19-127a-4d77-8e66-3096a62ae789\") " pod="openstack/cinder-scheduler-0" 
Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.040522 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b8ab5c-ca2e-44a3-9b82-9b9b99496646" path="/var/lib/kubelet/pods/e5b8ab5c-ca2e-44a3-9b82-9b9b99496646/volumes" Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.044061 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.163179 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9686f857b-mxcsr"] Feb 16 21:59:10 crc kubenswrapper[4792]: W0216 21:59:10.196883 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616f13af_2b9a_40da_a031_aa421f1ff745.slice/crio-b144331046ec69040ed0b782b8f1aed706a4a93e43e23d9d38145a62a5d905a8 WatchSource:0}: Error finding container b144331046ec69040ed0b782b8f1aed706a4a93e43e23d9d38145a62a5d905a8: Status 404 returned error can't find the container with id b144331046ec69040ed0b782b8f1aed706a4a93e43e23d9d38145a62a5d905a8 Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.615236 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9686f857b-mxcsr" event={"ID":"616f13af-2b9a-40da-a031-aa421f1ff745","Type":"ContainerStarted","Data":"4d221658ae40b62ac9c9a8b76ed3ed2bbddae2eea134ff63583946992abd720e"} Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.615767 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9686f857b-mxcsr" event={"ID":"616f13af-2b9a-40da-a031-aa421f1ff745","Type":"ContainerStarted","Data":"b144331046ec69040ed0b782b8f1aed706a4a93e43e23d9d38145a62a5d905a8"} Feb 16 21:59:10 crc kubenswrapper[4792]: I0216 21:59:10.628017 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 21:59:10 crc kubenswrapper[4792]: W0216 21:59:10.628836 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1584d19_127a_4d77_8e66_3096a62ae789.slice/crio-04c3bbd57b6345a1388faa9d90e4a565474a482fea56b4f28967914257f7ea1e WatchSource:0}: Error finding container 04c3bbd57b6345a1388faa9d90e4a565474a482fea56b4f28967914257f7ea1e: Status 404 returned error can't find the container with id 04c3bbd57b6345a1388faa9d90e4a565474a482fea56b4f28967914257f7ea1e Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.633063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9686f857b-mxcsr" event={"ID":"616f13af-2b9a-40da-a031-aa421f1ff745","Type":"ContainerStarted","Data":"fe3fde2d223c7941351ae1f95deed6682e7722884510ce992190638b522c9677"} Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.633589 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.633619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.635045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b1584d19-127a-4d77-8e66-3096a62ae789","Type":"ContainerStarted","Data":"d2c86db08ef9bd123f5ec1bcefa0d9664e9e63860f761d7f17adf1af45f77967"} Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.635072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"b1584d19-127a-4d77-8e66-3096a62ae789","Type":"ContainerStarted","Data":"04c3bbd57b6345a1388faa9d90e4a565474a482fea56b4f28967914257f7ea1e"} Feb 16 21:59:11 crc kubenswrapper[4792]: I0216 21:59:11.661553 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9686f857b-mxcsr" podStartSLOduration=2.661533231 podStartE2EDuration="2.661533231s" podCreationTimestamp="2026-02-16 21:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:11.658441128 +0000 UTC m=+1284.311720019" watchObservedRunningTime="2026-02-16 21:59:11.661533231 +0000 UTC m=+1284.314812132" Feb 16 21:59:12 crc kubenswrapper[4792]: I0216 21:59:12.649861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b1584d19-127a-4d77-8e66-3096a62ae789","Type":"ContainerStarted","Data":"961fb67a712b4770693809a1529b942eef272956b22a4f384b769f97314a8df8"} Feb 16 21:59:12 crc kubenswrapper[4792]: I0216 21:59:12.678857 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.678838416 podStartE2EDuration="3.678838416s" podCreationTimestamp="2026-02-16 21:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:12.672331359 +0000 UTC m=+1285.325610250" watchObservedRunningTime="2026-02-16 21:59:12.678838416 +0000 UTC m=+1285.332117307" Feb 16 21:59:13 crc kubenswrapper[4792]: I0216 21:59:13.903680 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5978f67fb4-lxqn8" Feb 16 21:59:14 crc kubenswrapper[4792]: I0216 21:59:14.863088 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 21:59:15 crc kubenswrapper[4792]: I0216 21:59:15.044355 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.768969 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d7f78dd75-dlmv8"] Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.772665 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.772665 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.776580 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.776872 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.777046 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.782689 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7f78dd75-dlmv8"] Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.945373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-run-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.945426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-internal-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.945580 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-combined-ca-bundle\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.945713 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-log-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.945792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-config-data\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.946105 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-public-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.946271 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gxz\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-kube-api-access-x5gxz\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " 
pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:17 crc kubenswrapper[4792]: I0216 21:59:17.946322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-etc-swift\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-public-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gxz\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-kube-api-access-x5gxz\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-etc-swift\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048749 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-run-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-internal-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-combined-ca-bundle\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048885 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-log-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.048921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-config-data\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 
21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.052392 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-log-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.052400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/633c7466-7045-47d2-906d-0d9881501baa-run-httpd\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.057371 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-combined-ca-bundle\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.060809 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-config-data\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.060994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-internal-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.061075 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/633c7466-7045-47d2-906d-0d9881501baa-public-tls-certs\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.062178 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-etc-swift\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.073686 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gxz\" (UniqueName: \"kubernetes.io/projected/633c7466-7045-47d2-906d-0d9881501baa-kube-api-access-x5gxz\") pod \"swift-proxy-6d7f78dd75-dlmv8\" (UID: \"633c7466-7045-47d2-906d-0d9881501baa\") " pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.092997 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.646708 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7f78dd75-dlmv8"] Feb 16 21:59:18 crc kubenswrapper[4792]: W0216 21:59:18.652990 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod633c7466_7045_47d2_906d_0d9881501baa.slice/crio-dca8495f6d9cd5e450229a8a84b5c05c6fa35528e8dd267c89e1445de572502d WatchSource:0}: Error finding container dca8495f6d9cd5e450229a8a84b5c05c6fa35528e8dd267c89e1445de572502d: Status 404 returned error can't find the container with id dca8495f6d9cd5e450229a8a84b5c05c6fa35528e8dd267c89e1445de572502d Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.745991 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" event={"ID":"633c7466-7045-47d2-906d-0d9881501baa","Type":"ContainerStarted","Data":"dca8495f6d9cd5e450229a8a84b5c05c6fa35528e8dd267c89e1445de572502d"} Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.891009 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.894125 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.898800 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.902911 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.915897 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vrgk4" Feb 16 21:59:18 crc kubenswrapper[4792]: I0216 21:59:18.917657 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.081226 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.081468 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.081522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.081574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhlnw\" (UniqueName: \"kubernetes.io/projected/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-kube-api-access-mhlnw\") pod 
\"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.183666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.183768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.183817 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhlnw\" (UniqueName: \"kubernetes.io/projected/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-kube-api-access-mhlnw\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.184025 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.187456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.190107 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.199277 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.210231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhlnw\" (UniqueName: \"kubernetes.io/projected/7a688f5f-10e0-42eb-863d-c8f919b2e3f5-kube-api-access-mhlnw\") pod \"openstackclient\" (UID: \"7a688f5f-10e0-42eb-863d-c8f919b2e3f5\") " pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.236019 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.764362 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.771805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" event={"ID":"633c7466-7045-47d2-906d-0d9881501baa","Type":"ContainerStarted","Data":"f6515cb53c87b9c5dfe62a20f60cab6d04f5b2056ea80300e72b87ca13a99dfa"} Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.771847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" event={"ID":"633c7466-7045-47d2-906d-0d9881501baa","Type":"ContainerStarted","Data":"6da2e336aa603c418bade9a11885414e7a8cd2d93bd8718b7c4357f3bedb6680"} Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.773034 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.773083 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:19 crc kubenswrapper[4792]: I0216 21:59:19.798258 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" podStartSLOduration=2.798238748 podStartE2EDuration="2.798238748s" podCreationTimestamp="2026-02-16 21:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:19.791985018 +0000 UTC m=+1292.445264019" watchObservedRunningTime="2026-02-16 21:59:19.798238748 +0000 UTC m=+1292.451517639" Feb 16 21:59:20 crc kubenswrapper[4792]: I0216 21:59:20.276188 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 21:59:20 crc kubenswrapper[4792]: I0216 21:59:20.786893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7a688f5f-10e0-42eb-863d-c8f919b2e3f5","Type":"ContainerStarted","Data":"bccea88de0bc7a4ba8ecfa56d5651bf799ee0cd80ab848cf57e9c8cb62f64a45"} Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.096209 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.096493 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-central-agent" containerID="cri-o://93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c" gracePeriod=30 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.097204 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="proxy-httpd" containerID="cri-o://a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b" gracePeriod=30 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.097296 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="sg-core" containerID="cri-o://8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f" gracePeriod=30 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.097345 4792 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-notification-agent" containerID="cri-o://790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891" gracePeriod=30 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.118295 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.207:3000/\": EOF" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.183663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.185549 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.205885 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6kpj6" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.206152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.206092 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.223333 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.284977 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.294821 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.294821 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.319125 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.360334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.360452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.360501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzwpg\" (UniqueName: \"kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.360549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.372678 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.374319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.376577 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.386210 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.430502 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.432103 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.437625 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.446286 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462692 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmz9\" (UniqueName: \"kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462815 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462865 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.462929 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.463040 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " 
pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.463073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.463104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzwpg\" (UniqueName: \"kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.470085 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.470279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.471642 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.489538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzwpg\" (UniqueName: \"kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg\") pod \"heat-engine-75477f9d95-6ddxt\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567075 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " 
pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567257 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567283 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb8z\" (UniqueName: \"kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567314 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxkm\" (UniqueName: \"kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567449 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567493 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567528 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567624 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmz9\" (UniqueName: \"kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9\") pod 
\"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567649 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.567684 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.568177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.568847 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.569396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.569781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.573006 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.594755 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmz9\" (UniqueName: \"kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9\") pod \"dnsmasq-dns-7d978555f9-g6wl7\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.598407 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.643968 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.669820 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.669999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670039 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670061 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb8z\" (UniqueName: \"kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670223 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxkm\" (UniqueName: \"kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.670378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.676575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 
21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.680316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.682246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.684290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.685637 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.685803 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.691370 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb8z\" (UniqueName: \"kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z\") pod \"heat-api-745698795b-zlr5t\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.694687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxkm\" (UniqueName: \"kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm\") pod \"heat-cfnapi-664b984f-mtmnp\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.712033 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.762390 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.830491 4792 generic.go:334] "Generic (PLEG): container finished" podID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerID="a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b" exitCode=0 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.830748 4792 generic.go:334] "Generic (PLEG): container finished" podID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerID="8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f" exitCode=2 Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.830770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerDied","Data":"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b"} Feb 16 21:59:21 crc kubenswrapper[4792]: I0216 21:59:21.830794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerDied","Data":"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.175446 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:22 crc kubenswrapper[4792]: W0216 21:59:22.180065 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b5ce16_7f9b_48f2_9e59_7c08a88a84f8.slice/crio-0af364830efcaf6dcb666ecdc999f9e1531ce1c7d07f9cae23405b251cd7f09c WatchSource:0}: Error finding container 0af364830efcaf6dcb666ecdc999f9e1531ce1c7d07f9cae23405b251cd7f09c: Status 404 returned error can't find the container with id 0af364830efcaf6dcb666ecdc999f9e1531ce1c7d07f9cae23405b251cd7f09c Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.388127 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.531090 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.579167 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.844210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75477f9d95-6ddxt" event={"ID":"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8","Type":"ContainerStarted","Data":"ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.844261 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75477f9d95-6ddxt" event={"ID":"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8","Type":"ContainerStarted","Data":"0af364830efcaf6dcb666ecdc999f9e1531ce1c7d07f9cae23405b251cd7f09c"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.844310 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.848546 4792 generic.go:334] "Generic (PLEG): container finished" podID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerID="93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c" exitCode=0 Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.848655 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerDied","Data":"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.850311 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-745698795b-zlr5t" event={"ID":"d0209b0b-6ef4-4595-80ad-27f346d3bbe1","Type":"ContainerStarted","Data":"bf0d5686ab93371927156d6e8dff793ed8588bfd2cf191dc810596881c2e8436"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.853164 4792 generic.go:334] "Generic (PLEG): container finished" podID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerID="34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587" exitCode=0 Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.853233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" event={"ID":"ba3359c5-ae19-444d-ba61-8ec59d678b3e","Type":"ContainerDied","Data":"34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.853266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" event={"ID":"ba3359c5-ae19-444d-ba61-8ec59d678b3e","Type":"ContainerStarted","Data":"772838a0d44b2d90c0626f1d09119dcf849c9a6866f4a347239af6561454b4f2"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.856839 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-664b984f-mtmnp" event={"ID":"6ebd8871-a518-4c36-89af-cefd9a5835b8","Type":"ContainerStarted","Data":"09d27391310c9d4da7b315d10ddc537f8476fa9acba6104b12ae4a919369c515"} Feb 16 21:59:22 crc kubenswrapper[4792]: I0216 21:59:22.873687 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-75477f9d95-6ddxt" podStartSLOduration=1.873666596 podStartE2EDuration="1.873666596s" podCreationTimestamp="2026-02-16 21:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:22.864478457 +0000 UTC m=+1295.517757358" watchObservedRunningTime="2026-02-16 21:59:22.873666596 +0000 UTC m=+1295.526945487" Feb 16 21:59:23 crc kubenswrapper[4792]: I0216 21:59:23.110450 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:23 crc kubenswrapper[4792]: I0216 21:59:23.892794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" event={"ID":"ba3359c5-ae19-444d-ba61-8ec59d678b3e","Type":"ContainerStarted","Data":"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265"} Feb 16 21:59:23 crc kubenswrapper[4792]: I0216 21:59:23.893060 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:23 crc kubenswrapper[4792]: I0216 21:59:23.923222 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" podStartSLOduration=2.923199293 podStartE2EDuration="2.923199293s" podCreationTimestamp="2026-02-16 21:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:23.910666824 +0000 UTC m=+1296.563945735" watchObservedRunningTime="2026-02-16 21:59:23.923199293 +0000 UTC m=+1296.576478174" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 
21:59:25.542634 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615357 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615423 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615521 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615568 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615673 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm6v2\" (UniqueName: \"kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.615761 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data\") pod \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\" (UID: \"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7\") " Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.621010 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.623884 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.624684 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2" (OuterVolumeSpecName: "kube-api-access-xm6v2") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "kube-api-access-xm6v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.634039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts" (OuterVolumeSpecName: "scripts") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.674879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.709768 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717929 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717958 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717968 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717976 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717983 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.717993 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm6v2\" (UniqueName: \"kubernetes.io/projected/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-kube-api-access-xm6v2\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.752538 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data" (OuterVolumeSpecName: "config-data") pod "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" (UID: "440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.820386 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.931046 4792 generic.go:334] "Generic (PLEG): container finished" podID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerID="790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891" exitCode=0 Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.931138 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerDied","Data":"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891"} Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.931166 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7","Type":"ContainerDied","Data":"d19c3da3a434f65c4f6b895e49aa1db84eee32c67d4715f7d0f96e02bc86308b"} Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.931172 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.931182 4792 scope.go:117] "RemoveContainer" containerID="a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.946651 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-745698795b-zlr5t" event={"ID":"d0209b0b-6ef4-4595-80ad-27f346d3bbe1","Type":"ContainerStarted","Data":"4ac96c6c4f416fc908217311796ababed49263c9f5015ac1f809158960ff2e5a"} Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.946817 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.961133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-664b984f-mtmnp" event={"ID":"6ebd8871-a518-4c36-89af-cefd9a5835b8","Type":"ContainerStarted","Data":"39f0e39f59fbaa62eb8d053c3f37df585719ffd0c68cf325fcf2debf0fdfafc7"} Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.962619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:25 crc kubenswrapper[4792]: I0216 21:59:25.984291 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-745698795b-zlr5t" podStartSLOduration=2.443167472 podStartE2EDuration="4.984269464s" podCreationTimestamp="2026-02-16 21:59:21 +0000 UTC" firstStartedPulling="2026-02-16 21:59:22.615633935 +0000 UTC m=+1295.268912826" lastFinishedPulling="2026-02-16 21:59:25.156735937 +0000 UTC m=+1297.810014818" observedRunningTime="2026-02-16 21:59:25.970121462 +0000 UTC m=+1298.623400343" watchObservedRunningTime="2026-02-16 21:59:25.984269464 +0000 UTC m=+1298.637548355" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.097972 4792 scope.go:117] "RemoveContainer" containerID="8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f" Feb 16 21:59:26 crc 
kubenswrapper[4792]: I0216 21:59:26.142081 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-664b984f-mtmnp" podStartSLOduration=2.622621423 podStartE2EDuration="5.142055068s" podCreationTimestamp="2026-02-16 21:59:21 +0000 UTC" firstStartedPulling="2026-02-16 21:59:22.632781599 +0000 UTC m=+1295.286060490" lastFinishedPulling="2026-02-16 21:59:25.152215244 +0000 UTC m=+1297.805494135" observedRunningTime="2026-02-16 21:59:26.00804507 +0000 UTC m=+1298.661323951" watchObservedRunningTime="2026-02-16 21:59:26.142055068 +0000 UTC m=+1298.795333959" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.189711 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.225033 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.242659 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.243354 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="proxy-httpd" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243370 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="proxy-httpd" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.243426 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-notification-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243434 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-notification-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.243446 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-central-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243452 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-central-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.243496 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="sg-core" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243504 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="sg-core" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243787 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-notification-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243810 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="proxy-httpd" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243825 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="sg-core" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.243856 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" containerName="ceilometer-central-agent" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.246302 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.250289 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.250738 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.261146 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.338932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw7c\" (UniqueName: \"kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339193 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.339349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.441577 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc 
kubenswrapper[4792]: I0216 21:59:26.441927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.441968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442093 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442201 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw7c\" (UniqueName: \"kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442363 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.442539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.447304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.447539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.448409 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.449097 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.459150 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw7c\" (UniqueName: \"kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c\") pod \"ceilometer-0\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") " pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.510458 4792 scope.go:117] "RemoveContainer" containerID="790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.577003 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.587906 4792 scope.go:117] "RemoveContainer" containerID="93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.642452 4792 scope.go:117] "RemoveContainer" containerID="a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.642975 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b\": container with ID starting with a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b not found: ID does not exist" containerID="a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.643003 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b"} err="failed to get container status \"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b\": rpc error: code = NotFound desc = could not find container \"a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b\": container with ID starting with a78f2f27f2b80d32ae3761629c58d41ac04f08fdf069b82f5fcce905911c3c3b not found: ID does not exist" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.643025 4792 scope.go:117] "RemoveContainer" containerID="8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.643453 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f\": container with ID starting with 8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f not found: ID does not exist" containerID="8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.643473 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f"} err="failed to get container status \"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f\": rpc error: code = NotFound desc = could not find container \"8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f\": container with ID starting with 8ce5ae1012a4cbe180b3494293ecd8a5d69ca3e3d125f5ed8b63c5b4dae1685f not found: ID does not exist" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.643486 4792 scope.go:117] "RemoveContainer" containerID="790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.643979 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891\": container with ID starting with 790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891 not found: ID does not exist" containerID="790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.644004 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891"} err="failed to get container status \"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891\": rpc error: code = NotFound desc = could not find container \"790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891\": container with ID starting with 790d5ece7e9f9455c272fc9075842b88c440a138daabfd546ded9830742a6891 not found: ID does not exist" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.644018 4792 scope.go:117] "RemoveContainer" containerID="93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c" Feb 16 21:59:26 crc kubenswrapper[4792]: E0216 21:59:26.646527 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c\": container with ID starting with 93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c not found: ID does not exist" containerID="93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c" Feb 16 21:59:26 crc kubenswrapper[4792]: I0216 21:59:26.646581 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c"} err="failed to get container status \"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c\": rpc error: code = NotFound desc = could not find container \"93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c\": container with ID starting with 93076b1f7b2fece1d65dce4dc124c85cd7d60a1d4e001b2d2e28cec5ab02a21c not found: ID does not exist" Feb 16 21:59:27 crc kubenswrapper[4792]: I0216 21:59:27.218980 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:27 crc kubenswrapper[4792]: I0216 21:59:27.992704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerStarted","Data":"a949e3739060aab1a9971c9fd73dc574641f3abf798f2336648f859cdb195caf"} Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.041890 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7" path="/var/lib/kubelet/pods/440dbd5c-eb5d-4d77-9ed9-82fce9d08ba7/volumes" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.104529 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.516989 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-dcdcd9bbc-f9nr2"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.518952 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.547111 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.548726 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.583281 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dcdcd9bbc-f9nr2"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.615118 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.656720 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.658217 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.687715 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.708100 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46pk\" (UniqueName: \"kubernetes.io/projected/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-kube-api-access-d46pk\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.708701 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data-custom\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.708734 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.708897 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-combined-ca-bundle\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.708952 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgwr\" (UniqueName: \"kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.709189 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.709373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.709463 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811188 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811250 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ft6\" (UniqueName: \"kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-combined-ca-bundle\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgwr\" (UniqueName: \"kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " 
pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46pk\" (UniqueName: \"kubernetes.io/projected/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-kube-api-access-d46pk\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.811657 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data-custom\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.821058 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.822744 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc 
kubenswrapper[4792]: I0216 21:59:28.827153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.827764 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.833953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-combined-ca-bundle\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.834706 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-config-data-custom\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.839976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsgwr\" (UniqueName: \"kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr\") pod \"heat-api-678f746b4c-p48lm\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.842257 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46pk\" (UniqueName: \"kubernetes.io/projected/1ded7fb3-2456-4230-ace6-8786c6b9fd4e-kube-api-access-d46pk\") pod \"heat-engine-dcdcd9bbc-f9nr2\" (UID: \"1ded7fb3-2456-4230-ace6-8786c6b9fd4e\") " pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.849974 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.887408 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.914471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.914559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.914787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.914822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ft6\" (UniqueName: \"kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.921322 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.932029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.932639 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.941562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ft6\" (UniqueName: \"kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6\") pod \"heat-cfnapi-6bf864b9dc-xnqfg\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:28 crc kubenswrapper[4792]: I0216 21:59:28.981995 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:29 crc kubenswrapper[4792]: I0216 21:59:29.020179 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerStarted","Data":"a4e0a88102556775a5229575cd8e26ab16eb9aabf65bd19281ab3dcd22931b5e"} Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.427880 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.428503 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-745698795b-zlr5t" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" containerID="cri-o://4ac96c6c4f416fc908217311796ababed49263c9f5015ac1f809158960ff2e5a" gracePeriod=60 Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.442754 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.442958 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-664b984f-mtmnp" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi" containerID="cri-o://39f0e39f59fbaa62eb8d053c3f37df585719ffd0c68cf325fcf2debf0fdfafc7" gracePeriod=60 Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.502649 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-789d9b5ffd-kgfxb"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.504194 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.515654 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fdc6c774c-p5p85"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.517182 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.525337 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-789d9b5ffd-kgfxb"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.538305 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-664b984f-mtmnp" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": EOF" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.538393 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.538393 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.540687 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.540874 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.616486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-internal-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.616888 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-public-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.616929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.616954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2ftc\" (UniqueName: \"kubernetes.io/projected/2b3f7c55-8515-478d-bd01-a18403a7116b-kube-api-access-t2ftc\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.616998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzrtp\" (UniqueName: \"kubernetes.io/projected/9159f697-7cfe-428b-8146-9fa0bab94592-kube-api-access-wzrtp\") pod 
\"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617202 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-internal-tls-certs\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-combined-ca-bundle\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-combined-ca-bundle\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-public-tls-certs\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data-custom\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.617492 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data-custom\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.652629 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-745698795b-zlr5t" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": EOF" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726218 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzrtp\" (UniqueName: \"kubernetes.io/projected/9159f697-7cfe-428b-8146-9fa0bab94592-kube-api-access-wzrtp\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726349 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-internal-tls-certs\") pod 
\"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-combined-ca-bundle\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-combined-ca-bundle\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-public-tls-certs\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data-custom\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data-custom\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726817 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-internal-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726885 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-public-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726917 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2ftc\" (UniqueName: \"kubernetes.io/projected/2b3f7c55-8515-478d-bd01-a18403a7116b-kube-api-access-t2ftc\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: 
\"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.726988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.737294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.742899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data-custom\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.744572 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-internal-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.745399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-internal-tls-certs\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.746164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-public-tls-certs\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.747178 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-config-data\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.776159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-combined-ca-bundle\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.776464 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-config-data-custom\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.776759 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3f7c55-8515-478d-bd01-a18403a7116b-combined-ca-bundle\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.779228 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9159f697-7cfe-428b-8146-9fa0bab94592-public-tls-certs\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.780196 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2ftc\" (UniqueName: \"kubernetes.io/projected/2b3f7c55-8515-478d-bd01-a18403a7116b-kube-api-access-t2ftc\") pod \"heat-cfnapi-fdc6c774c-p5p85\" (UID: \"2b3f7c55-8515-478d-bd01-a18403a7116b\") " pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.780562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzrtp\" (UniqueName: \"kubernetes.io/projected/9159f697-7cfe-428b-8146-9fa0bab94592-kube-api-access-wzrtp\") pod \"heat-api-789d9b5ffd-kgfxb\" (UID: \"9159f697-7cfe-428b-8146-9fa0bab94592\") " pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.794354 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fdc6c774c-p5p85"] Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.883166 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:30 crc kubenswrapper[4792]: I0216 21:59:30.930822 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.128839 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.533298 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.533578 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.646830 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.734640 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.734867 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="dnsmasq-dns" containerID="cri-o://92d738d240a38d7d5c41ed8b98c5cba777ecf76891f71e9d7d1bc73e6200c095" gracePeriod=10 Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.816369 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.205:5353: connect: connection refused" Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.843481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58f4767d9c-gk2k8" Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.941366 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.941680 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fc7bbfd9b-jkwk2" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-api" containerID="cri-o://91d8b01a9668051c525bff8feae20fb98fabb81992b60d2574e1f6824a51249a" gracePeriod=30 Feb 16 21:59:31 crc kubenswrapper[4792]: I0216 21:59:31.941817 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fc7bbfd9b-jkwk2" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-httpd" containerID="cri-o://dd2f06bb4ed8aad609227120c023177d23fdf403e5fec286afed063cd65345e4" gracePeriod=30 Feb 16 21:59:32 crc kubenswrapper[4792]: I0216 21:59:32.071684 4792 generic.go:334] "Generic (PLEG): container finished" podID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerID="92d738d240a38d7d5c41ed8b98c5cba777ecf76891f71e9d7d1bc73e6200c095" exitCode=0 Feb 16 21:59:32 crc kubenswrapper[4792]: I0216 21:59:32.071727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" 
event={"ID":"78e87464-f75c-47e0-b302-a98fe79d4f43","Type":"ContainerDied","Data":"92d738d240a38d7d5c41ed8b98c5cba777ecf76891f71e9d7d1bc73e6200c095"} Feb 16 21:59:33 crc kubenswrapper[4792]: I0216 21:59:33.086739 4792 generic.go:334] "Generic (PLEG): container finished" podID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerID="dd2f06bb4ed8aad609227120c023177d23fdf403e5fec286afed063cd65345e4" exitCode=0 Feb 16 21:59:33 crc kubenswrapper[4792]: I0216 21:59:33.087085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerDied","Data":"dd2f06bb4ed8aad609227120c023177d23fdf403e5fec286afed063cd65345e4"} Feb 16 21:59:35 crc kubenswrapper[4792]: I0216 21:59:35.791311 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:59:35 crc kubenswrapper[4792]: I0216 21:59:35.792743 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-log" containerID="cri-o://292a62c4357341975de7a60cb9ce980634c1fc9a1bba2ed88e7873810d1bcf82" gracePeriod=30 Feb 16 21:59:35 crc kubenswrapper[4792]: I0216 21:59:35.793037 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-httpd" containerID="cri-o://b502d9d3e57eb08d08035cf2fdac8cc8c7c7d30a9921b5fa533d216034a1a605" gracePeriod=30 Feb 16 21:59:35 crc kubenswrapper[4792]: I0216 21:59:35.967982 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-664b984f-mtmnp" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": read tcp 10.217.0.2:58144->10.217.0.216:8000: read: connection reset by peer" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.042834 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-745698795b-zlr5t" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": read tcp 10.217.0.2:58812->10.217.0.217:8004: read: connection reset by peer" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.135113 4792 generic.go:334] "Generic (PLEG): container finished" podID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerID="91d8b01a9668051c525bff8feae20fb98fabb81992b60d2574e1f6824a51249a" exitCode=0 Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.135187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerDied","Data":"91d8b01a9668051c525bff8feae20fb98fabb81992b60d2574e1f6824a51249a"} Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.136841 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerID="39f0e39f59fbaa62eb8d053c3f37df585719ffd0c68cf325fcf2debf0fdfafc7" exitCode=0 Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.136895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-664b984f-mtmnp" event={"ID":"6ebd8871-a518-4c36-89af-cefd9a5835b8","Type":"ContainerDied","Data":"39f0e39f59fbaa62eb8d053c3f37df585719ffd0c68cf325fcf2debf0fdfafc7"} Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.139069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerDied","Data":"292a62c4357341975de7a60cb9ce980634c1fc9a1bba2ed88e7873810d1bcf82"}
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.436250 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss"
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.529651 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.530208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.530565 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.531194 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxwlf\" (UniqueName: \"kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.531336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.531434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb\") pod \"78e87464-f75c-47e0-b302-a98fe79d4f43\" (UID: \"78e87464-f75c-47e0-b302-a98fe79d4f43\") "
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.568813 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf" (OuterVolumeSpecName: "kube-api-access-zxwlf") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "kube-api-access-zxwlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.634756 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxwlf\" (UniqueName: \"kubernetes.io/projected/78e87464-f75c-47e0-b302-a98fe79d4f43-kube-api-access-zxwlf\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.646823 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.650060 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.652421 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.661123 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.685612 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config" (OuterVolumeSpecName: "config") pod "78e87464-f75c-47e0-b302-a98fe79d4f43" (UID: "78e87464-f75c-47e0-b302-a98fe79d4f43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.740297 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.740328 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.740340 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.740349 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.740358 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78e87464-f75c-47e0-b302-a98fe79d4f43-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.859268 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.872450 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-789d9b5ffd-kgfxb"] Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.953685 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs\") pod \"1262ac7e-ff1e-40b4-be35-03a9314fef99\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.953819 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config\") pod \"1262ac7e-ff1e-40b4-be35-03a9314fef99\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.953868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle\") pod \"1262ac7e-ff1e-40b4-be35-03a9314fef99\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.953919 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qrmq\" (UniqueName: \"kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq\") pod \"1262ac7e-ff1e-40b4-be35-03a9314fef99\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.953950 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config\") pod \"1262ac7e-ff1e-40b4-be35-03a9314fef99\" (UID: \"1262ac7e-ff1e-40b4-be35-03a9314fef99\") " Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.983852 4792 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1262ac7e-ff1e-40b4-be35-03a9314fef99" (UID: "1262ac7e-ff1e-40b4-be35-03a9314fef99"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:36 crc kubenswrapper[4792]: I0216 21:59:36.989270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq" (OuterVolumeSpecName: "kube-api-access-5qrmq") pod "1262ac7e-ff1e-40b4-be35-03a9314fef99" (UID: "1262ac7e-ff1e-40b4-be35-03a9314fef99"). InnerVolumeSpecName "kube-api-access-5qrmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.064112 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qrmq\" (UniqueName: \"kubernetes.io/projected/1262ac7e-ff1e-40b4-be35-03a9314fef99-kube-api-access-5qrmq\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.064168 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.101757 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1262ac7e-ff1e-40b4-be35-03a9314fef99" (UID: "1262ac7e-ff1e-40b4-be35-03a9314fef99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.115762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config" (OuterVolumeSpecName: "config") pod "1262ac7e-ff1e-40b4-be35-03a9314fef99" (UID: "1262ac7e-ff1e-40b4-be35-03a9314fef99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.118618 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1262ac7e-ff1e-40b4-be35-03a9314fef99" (UID: "1262ac7e-ff1e-40b4-be35-03a9314fef99"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.161944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fc7bbfd9b-jkwk2" event={"ID":"1262ac7e-ff1e-40b4-be35-03a9314fef99","Type":"ContainerDied","Data":"068bdaaa57629a6b3b04d1c0cb57a975ab928bf2548f0fc88971cfff99784e8e"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.161990 4792 scope.go:117] "RemoveContainer" containerID="dd2f06bb4ed8aad609227120c023177d23fdf403e5fec286afed063cd65345e4" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.162048 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fc7bbfd9b-jkwk2" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.166061 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-config\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.166098 4792 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.166109 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262ac7e-ff1e-40b4-be35-03a9314fef99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.169257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-789d9b5ffd-kgfxb" event={"ID":"9159f697-7cfe-428b-8146-9fa0bab94592","Type":"ContainerStarted","Data":"f7ec52927014bef7df13e5c7368ff219cdbb7d45524f889e5b087608bc207f02"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.171032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-664b984f-mtmnp" event={"ID":"6ebd8871-a518-4c36-89af-cefd9a5835b8","Type":"ContainerDied","Data":"09d27391310c9d4da7b315d10ddc537f8476fa9acba6104b12ae4a919369c515"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.171058 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d27391310c9d4da7b315d10ddc537f8476fa9acba6104b12ae4a919369c515" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.174067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" event={"ID":"78e87464-f75c-47e0-b302-a98fe79d4f43","Type":"ContainerDied","Data":"1801d7301bd3c0dd05c7cd28c62fff34be213c5c245bb82f2f0a2345f4e3801d"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.174162 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-j8dss" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.181839 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7a688f5f-10e0-42eb-863d-c8f919b2e3f5","Type":"ContainerStarted","Data":"953c356703bb27968ef8c436befaf5a580e29c628f2d0646fa7a9f30e850e9b4"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.214550 4792 generic.go:334] "Generic (PLEG): container finished" podID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerID="4ac96c6c4f416fc908217311796ababed49263c9f5015ac1f809158960ff2e5a" exitCode=0 Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.214671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-745698795b-zlr5t" event={"ID":"d0209b0b-6ef4-4595-80ad-27f346d3bbe1","Type":"ContainerDied","Data":"4ac96c6c4f416fc908217311796ababed49263c9f5015ac1f809158960ff2e5a"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.214707 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-745698795b-zlr5t" event={"ID":"d0209b0b-6ef4-4595-80ad-27f346d3bbe1","Type":"ContainerDied","Data":"bf0d5686ab93371927156d6e8dff793ed8588bfd2cf191dc810596881c2e8436"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.214723 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0d5686ab93371927156d6e8dff793ed8588bfd2cf191dc810596881c2e8436" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.220579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerStarted","Data":"bfd2fae14f45e46a6b547062ea302208192a9808c6e957ed34c6d659193cdc9d"} Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.222225 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.025526727 podStartE2EDuration="19.222211277s" podCreationTimestamp="2026-02-16 21:59:18 +0000 UTC" firstStartedPulling="2026-02-16 21:59:19.764089094 +0000 UTC m=+1292.417367985" lastFinishedPulling="2026-02-16 21:59:35.960773644 +0000 UTC m=+1308.614052535" observedRunningTime="2026-02-16 21:59:37.200306724 +0000 UTC m=+1309.853585635" watchObservedRunningTime="2026-02-16 21:59:37.222211277 +0000 UTC m=+1309.875490168" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.249491 4792 scope.go:117] "RemoveContainer" containerID="91d8b01a9668051c525bff8feae20fb98fabb81992b60d2574e1f6824a51249a" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.299916 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.320014 4792 scope.go:117] "RemoveContainer" containerID="92d738d240a38d7d5c41ed8b98c5cba777ecf76891f71e9d7d1bc73e6200c095" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.329132 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.377359 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fdc6c774c-p5p85"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.388397 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dcdcd9bbc-f9nr2"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.388948 4792 scope.go:117] "RemoveContainer" containerID="165021f89fd6e54d747800b5ccf191c979959895dc4beea875723407cb23715a" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.415668 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.426875 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-j8dss"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.447954 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") pod \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473508 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom\") pod \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473669 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb8z\" (UniqueName: \"kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z\") pod \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473725 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxxkm\" (UniqueName: \"kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm\") pod \"6ebd8871-a518-4c36-89af-cefd9a5835b8\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473761 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle\") pod \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473860 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom\") pod \"6ebd8871-a518-4c36-89af-cefd9a5835b8\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.473932 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data\") pod \"6ebd8871-a518-4c36-89af-cefd9a5835b8\" (UID: 
\"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.474091 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle\") pod \"6ebd8871-a518-4c36-89af-cefd9a5835b8\" (UID: \"6ebd8871-a518-4c36-89af-cefd9a5835b8\") " Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.478280 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z" (OuterVolumeSpecName: "kube-api-access-5kb8z") pod "d0209b0b-6ef4-4595-80ad-27f346d3bbe1" (UID: "d0209b0b-6ef4-4595-80ad-27f346d3bbe1"). InnerVolumeSpecName "kube-api-access-5kb8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.478677 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.479629 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d0209b0b-6ef4-4595-80ad-27f346d3bbe1" (UID: "d0209b0b-6ef4-4595-80ad-27f346d3bbe1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.483967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6ebd8871-a518-4c36-89af-cefd9a5835b8" (UID: "6ebd8871-a518-4c36-89af-cefd9a5835b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.494053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm" (OuterVolumeSpecName: "kube-api-access-zxxkm") pod "6ebd8871-a518-4c36-89af-cefd9a5835b8" (UID: "6ebd8871-a518-4c36-89af-cefd9a5835b8"). InnerVolumeSpecName "kube-api-access-zxxkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.496675 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.516654 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fc7bbfd9b-jkwk2"] Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.585231 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.585272 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb8z\" (UniqueName: \"kubernetes.io/projected/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-kube-api-access-5kb8z\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.585287 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxxkm\" (UniqueName: \"kubernetes.io/projected/6ebd8871-a518-4c36-89af-cefd9a5835b8-kube-api-access-zxxkm\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.585300 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.585353 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ebd8871-a518-4c36-89af-cefd9a5835b8" (UID: "6ebd8871-a518-4c36-89af-cefd9a5835b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: E0216 21:59:37.606622 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data podName:d0209b0b-6ef4-4595-80ad-27f346d3bbe1 nodeName:}" failed. No retries permitted until 2026-02-16 21:59:38.1065792 +0000 UTC m=+1310.759858081 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data") pod "d0209b0b-6ef4-4595-80ad-27f346d3bbe1" (UID: "d0209b0b-6ef4-4595-80ad-27f346d3bbe1") : error deleting /var/lib/kubelet/pods/d0209b0b-6ef4-4595-80ad-27f346d3bbe1/volume-subpaths: remove /var/lib/kubelet/pods/d0209b0b-6ef4-4595-80ad-27f346d3bbe1/volume-subpaths: no such file or directory Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.617346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0209b0b-6ef4-4595-80ad-27f346d3bbe1" (UID: "d0209b0b-6ef4-4595-80ad-27f346d3bbe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.621280 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data" (OuterVolumeSpecName: "config-data") pod "6ebd8871-a518-4c36-89af-cefd9a5835b8" (UID: "6ebd8871-a518-4c36-89af-cefd9a5835b8"). 
Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.689007 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.689321 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:37 crc kubenswrapper[4792]: I0216 21:59:37.689334 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd8871-a518-4c36-89af-cefd9a5835b8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.051247 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" path="/var/lib/kubelet/pods/1262ac7e-ff1e-40b4-be35-03a9314fef99/volumes"
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.052241 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" path="/var/lib/kubelet/pods/78e87464-f75c-47e0-b302-a98fe79d4f43/volumes"
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.086117 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.086401 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-log" containerID="cri-o://80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8" gracePeriod=30
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.086441 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-httpd" containerID="cri-o://2b3b19200f4b032f8178f3c40fbfe90c01154988f221c2db38d6aa55f60c917f" gracePeriod=30
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.219367 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") pod \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\" (UID: \"d0209b0b-6ef4-4595-80ad-27f346d3bbe1\") "
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.261108 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" event={"ID":"2b3f7c55-8515-478d-bd01-a18403a7116b","Type":"ContainerStarted","Data":"65ee9746162534445ff4b31b714c0bda2632299b669026bb8cd6d65e113281ed"}
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.262617 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fdc6c774c-p5p85"
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.262718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" event={"ID":"2b3f7c55-8515-478d-bd01-a18403a7116b","Type":"ContainerStarted","Data":"5a9b9689703b075dba9820e5d0bb7d7ddead5d72b81dfc8c648cd9fc25b4e537"}
Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.285092 4792 generic.go:334] "Generic (PLEG): container finished" podID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerID="80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8" exitCode=143
podID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerID="80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8" exitCode=143 Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.285565 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerDied","Data":"80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.285921 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" podStartSLOduration=8.285899639 podStartE2EDuration="8.285899639s" podCreationTimestamp="2026-02-16 21:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:38.284732366 +0000 UTC m=+1310.938011257" watchObservedRunningTime="2026-02-16 21:59:38.285899639 +0000 UTC m=+1310.939178530" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.288639 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data" (OuterVolumeSpecName: "config-data") pod "d0209b0b-6ef4-4595-80ad-27f346d3bbe1" (UID: "d0209b0b-6ef4-4595-80ad-27f346d3bbe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.294959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" event={"ID":"a94eb231-cfd5-48bb-9b0e-4d15ce07695f","Type":"ContainerStarted","Data":"7e0cfd4bf2323b15be1963283a0a6deb08d28dc41dba1cfc29aee3f64d9292b6"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.300088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678f746b4c-p48lm" event={"ID":"38a645f0-cc32-41d9-9309-22cd86985b4f","Type":"ContainerStarted","Data":"59b1733af0ad6b6462bd0cf15a44b0b2ea9a98062f99a6575b8c6e60f93f071b"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.300347 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678f746b4c-p48lm" event={"ID":"38a645f0-cc32-41d9-9309-22cd86985b4f","Type":"ContainerStarted","Data":"d0f61a3a77405da44052a0d9381d9af50f163ad308695c3280c70a6c2d6e1f2d"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.300562 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.315055 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerStarted","Data":"8975476d18dcaaf942344134778975b5bbabbf88e3c8d06bde73a17de3a25a8f"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.325724 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-678f746b4c-p48lm" podStartSLOduration=10.325492824 podStartE2EDuration="10.325492824s" podCreationTimestamp="2026-02-16 21:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:38.322225061 +0000 UTC m=+1310.975503962" watchObservedRunningTime="2026-02-16 21:59:38.325492824 +0000 UTC m=+1310.978771715" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.330917 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-789d9b5ffd-kgfxb" event={"ID":"9159f697-7cfe-428b-8146-9fa0bab94592","Type":"ContainerStarted","Data":"ea6de25f4cd5e88c17203b05fae1232624bac4cb7aba23ae696e5c5a66ebf9eb"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.331472 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.334020 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0209b0b-6ef4-4595-80ad-27f346d3bbe1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.343045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" event={"ID":"1ded7fb3-2456-4230-ace6-8786c6b9fd4e","Type":"ContainerStarted","Data":"495b7d4c84eeff8105bd344449e3f69b54c3324236eec9e25e3676926553de9e"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.343085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" event={"ID":"1ded7fb3-2456-4230-ace6-8786c6b9fd4e","Type":"ContainerStarted","Data":"e478676121c9b0fdb4ee41c6de2f329b51335a225bfd3a7e30c15df84c32876c"} Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.343183 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.349631 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-745698795b-zlr5t" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.350114 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-664b984f-mtmnp" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.355542 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-789d9b5ffd-kgfxb" podStartSLOduration=8.355529796999999 podStartE2EDuration="8.355529797s" podCreationTimestamp="2026-02-16 21:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:38.354370195 +0000 UTC m=+1311.007649086" watchObservedRunningTime="2026-02-16 21:59:38.355529797 +0000 UTC m=+1311.008808688" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.451536 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" podStartSLOduration=10.451514816 podStartE2EDuration="10.451514816s" podCreationTimestamp="2026-02-16 21:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:38.398351035 +0000 UTC m=+1311.051629936" watchObservedRunningTime="2026-02-16 21:59:38.451514816 +0000 UTC m=+1311.104793707" Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.540038 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.566784 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-664b984f-mtmnp"] Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.597679 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:38 crc kubenswrapper[4792]: I0216 21:59:38.621874 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-api-745698795b-zlr5t"] Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.373159 4792 generic.go:334] "Generic (PLEG): container finished" podID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerID="b502d9d3e57eb08d08035cf2fdac8cc8c7c7d30a9921b5fa533d216034a1a605" exitCode=0 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.373313 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerDied","Data":"b502d9d3e57eb08d08035cf2fdac8cc8c7c7d30a9921b5fa533d216034a1a605"} Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.381695 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerStarted","Data":"4bf7be4cd12148aaa82602773408aad06f3cc5ec2f63f4030105ad1e9bec3df3"} Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.381887 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-central-agent" containerID="cri-o://a4e0a88102556775a5229575cd8e26ab16eb9aabf65bd19281ab3dcd22931b5e" gracePeriod=30 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.382174 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.382528 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="proxy-httpd" containerID="cri-o://4bf7be4cd12148aaa82602773408aad06f3cc5ec2f63f4030105ad1e9bec3df3" gracePeriod=30 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.382578 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="sg-core" containerID="cri-o://8975476d18dcaaf942344134778975b5bbabbf88e3c8d06bde73a17de3a25a8f" gracePeriod=30 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.382631 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-notification-agent" containerID="cri-o://bfd2fae14f45e46a6b547062ea302208192a9808c6e957ed34c6d659193cdc9d" gracePeriod=30 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.397044 4792 generic.go:334] "Generic (PLEG): container finished" podID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerID="76465ef6501b564c44f847dd800d4cc0cad4533bb4aef9ca43a9457976ffb2fb" exitCode=1 Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.397155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" event={"ID":"a94eb231-cfd5-48bb-9b0e-4d15ce07695f","Type":"ContainerDied","Data":"76465ef6501b564c44f847dd800d4cc0cad4533bb4aef9ca43a9457976ffb2fb"} Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.398461 4792 scope.go:117] "RemoveContainer" containerID="76465ef6501b564c44f847dd800d4cc0cad4533bb4aef9ca43a9457976ffb2fb" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.441472 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.14516124 podStartE2EDuration="13.441453322s" podCreationTimestamp="2026-02-16 21:59:26 +0000 UTC" firstStartedPulling="2026-02-16 
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.449805 4792 generic.go:334] "Generic (PLEG): container finished" podID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerID="59b1733af0ad6b6462bd0cf15a44b0b2ea9a98062f99a6575b8c6e60f93f071b" exitCode=1
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.450663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678f746b4c-p48lm" event={"ID":"38a645f0-cc32-41d9-9309-22cd86985b4f","Type":"ContainerDied","Data":"59b1733af0ad6b6462bd0cf15a44b0b2ea9a98062f99a6575b8c6e60f93f071b"}
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.451063 4792 scope.go:117] "RemoveContainer" containerID="59b1733af0ad6b6462bd0cf15a44b0b2ea9a98062f99a6575b8c6e60f93f071b"
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.706952 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.778080 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxqc\" (UniqueName: \"kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.778222 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.778263 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.778865 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.778956 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.779030 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.779108 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") "
\"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.779151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data\") pod \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\" (UID: \"e64dc7aa-7b06-4a29-9684-340f3aa33cfe\") " Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.780004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.780256 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs" (OuterVolumeSpecName: "logs") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.798190 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts" (OuterVolumeSpecName: "scripts") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.820762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc" (OuterVolumeSpecName: "kube-api-access-xkxqc") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "kube-api-access-xkxqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.893422 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.893463 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.893476 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:39 crc kubenswrapper[4792]: I0216 21:59:39.893488 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxqc\" (UniqueName: \"kubernetes.io/projected/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-kube-api-access-xkxqc\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.044642 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" path="/var/lib/kubelet/pods/6ebd8871-a518-4c36-89af-cefd9a5835b8/volumes" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.045198 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" path="/var/lib/kubelet/pods/d0209b0b-6ef4-4595-80ad-27f346d3bbe1/volumes" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.299109 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9" (OuterVolumeSpecName: "glance") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.306004 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") on node \"crc\" " Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.369149 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.369356 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9") on node "crc" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.408457 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.435001 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). 
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.513477 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.530512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data" (OuterVolumeSpecName: "config-data") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.543569 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.585813 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e64dc7aa-7b06-4a29-9684-340f3aa33cfe" (UID: "e64dc7aa-7b06-4a29-9684-340f3aa33cfe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.597508 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerID="4bf7be4cd12148aaa82602773408aad06f3cc5ec2f63f4030105ad1e9bec3df3" exitCode=0
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.597539 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerID="8975476d18dcaaf942344134778975b5bbabbf88e3c8d06bde73a17de3a25a8f" exitCode=2
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.597549 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerID="bfd2fae14f45e46a6b547062ea302208192a9808c6e957ed34c6d659193cdc9d" exitCode=0
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.597555 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerID="a4e0a88102556775a5229575cd8e26ab16eb9aabf65bd19281ab3dcd22931b5e" exitCode=0
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.629620 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.643108 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64dc7aa-7b06-4a29-9684-340f3aa33cfe-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.649477 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" podStartSLOduration=12.649456356 podStartE2EDuration="12.649456356s" podCreationTimestamp="2026-02-16 21:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:40.648534219 +0000 UTC m=+1313.301813100" watchObservedRunningTime="2026-02-16 21:59:40.649456356 +0000 UTC m=+1313.302735257"
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e64dc7aa-7b06-4a29-9684-340f3aa33cfe","Type":"ContainerDied","Data":"d5567924ca97846b5fd833f82b6000e5062e393c405a7ba0996f30a5b3b9c88c"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg"
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerDied","Data":"4bf7be4cd12148aaa82602773408aad06f3cc5ec2f63f4030105ad1e9bec3df3"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695846 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerDied","Data":"8975476d18dcaaf942344134778975b5bbabbf88e3c8d06bde73a17de3a25a8f"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerDied","Data":"bfd2fae14f45e46a6b547062ea302208192a9808c6e957ed34c6d659193cdc9d"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerDied","Data":"a4e0a88102556775a5229575cd8e26ab16eb9aabf65bd19281ab3dcd22931b5e"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" event={"ID":"a94eb231-cfd5-48bb-9b0e-4d15ce07695f","Type":"ContainerStarted","Data":"66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518"}
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.695907 4792 scope.go:117] "RemoveContainer" containerID="b502d9d3e57eb08d08035cf2fdac8cc8c7c7d30a9921b5fa533d216034a1a605"
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.796334 4792 scope.go:117] "RemoveContainer" containerID="292a62c4357341975de7a60cb9ce980634c1fc9a1bba2ed88e7873810d1bcf82"
Feb 16 21:59:40 crc kubenswrapper[4792]: I0216 21:59:40.982785 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.001353 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.016044 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061565 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061630 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061651 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061748 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061779 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zw7c\" (UniqueName: \"kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061807 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.061870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd\") pod \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\" (UID: \"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb\") "
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.063825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.066137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.068987 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts" (OuterVolumeSpecName: "scripts") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.077524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c" (OuterVolumeSpecName: "kube-api-access-4zw7c") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "kube-api-access-4zw7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.111980 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113718 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-httpd" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113753 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-httpd" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113771 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-central-agent" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113779 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-central-agent" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113795 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113802 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113836 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="proxy-httpd" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113846 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="proxy-httpd" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113861 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-notification-agent" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113869 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-notification-agent" Feb 16 21:59:41 crc 
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113890 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-api"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113906 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-httpd"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113913 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-httpd"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113924 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="sg-core"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113932 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="sg-core"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113945 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="dnsmasq-dns"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113952 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="dnsmasq-dns"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.113980 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.113986 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.114008 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="init"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114015 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="init"
Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.114030 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-log"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114039 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-log"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114322 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114351 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114367 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="sg-core"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114377 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e87464-f75c-47e0-b302-a98fe79d4f43" containerName="dnsmasq-dns"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114385 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-log"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114396 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="proxy-httpd"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114407 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-httpd"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114440 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-central-agent"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114454 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1262ac7e-ff1e-40b4-be35-03a9314fef99" containerName="neutron-api"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114465 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" containerName="glance-httpd"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.114476 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" containerName="ceilometer-notification-agent"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.117935 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.120936 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.121130 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.132947 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.165906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166087 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
\"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166283 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166356 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166371 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8hcf\" (UniqueName: \"kubernetes.io/projected/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-kube-api-access-l8hcf\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166439 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166452 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166460 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zw7c\" (UniqueName: \"kubernetes.io/projected/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-kube-api-access-4zw7c\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.166472 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.172948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.263702 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data" (OuterVolumeSpecName: "config-data") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.268976 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269097 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8hcf\" (UniqueName: \"kubernetes.io/projected/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-kube-api-access-l8hcf\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: 
\"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269309 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.269319 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.272808 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.273541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.277972 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.281077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.281465 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.281507 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fda07fe1d1b61a7ca2f0646c25157ff7862921af25dfa15dc58bc6fca46e142c/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.287304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.288350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.290845 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8hcf\" (UniqueName: \"kubernetes.io/projected/2fa4253d-0a12-4f95-a89e-ab8cf0507ded-kube-api-access-l8hcf\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.305697 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" (UID: "7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.371670 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.415278 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9\") pod \"glance-default-external-api-0\" (UID: \"2fa4253d-0a12-4f95-a89e-ab8cf0507ded\") " pod="openstack/glance-default-external-api-0"
Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.615496 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.642697 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.678894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb","Type":"ContainerDied","Data":"a949e3739060aab1a9971c9fd73dc574641f3abf798f2336648f859cdb195caf"} Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.678910 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.678975 4792 scope.go:117] "RemoveContainer" containerID="4bf7be4cd12148aaa82602773408aad06f3cc5ec2f63f4030105ad1e9bec3df3" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.698847 4792 generic.go:334] "Generic (PLEG): container finished" podID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerID="66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518" exitCode=1 Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.698934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" event={"ID":"a94eb231-cfd5-48bb-9b0e-4d15ce07695f","Type":"ContainerDied","Data":"66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518"} Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.700219 4792 scope.go:117] "RemoveContainer" containerID="66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.700527 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bf864b9dc-xnqfg_openstack(a94eb231-cfd5-48bb-9b0e-4d15ce07695f)\"" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.704258 4792 generic.go:334] "Generic (PLEG): container finished" podID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" exitCode=1 Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.704323 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678f746b4c-p48lm" event={"ID":"38a645f0-cc32-41d9-9309-22cd86985b4f","Type":"ContainerDied","Data":"f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e"} Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.705225 4792 scope.go:117] "RemoveContainer" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" Feb 16 21:59:41 crc kubenswrapper[4792]: E0216 21:59:41.705502 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-678f746b4c-p48lm_openstack(38a645f0-cc32-41d9-9309-22cd86985b4f)\"" pod="openstack/heat-api-678f746b4c-p48lm" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.708224 4792 generic.go:334] "Generic (PLEG): container finished" podID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerID="2b3b19200f4b032f8178f3c40fbfe90c01154988f221c2db38d6aa55f60c917f" exitCode=0 Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.708265 
4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerDied","Data":"2b3b19200f4b032f8178f3c40fbfe90c01154988f221c2db38d6aa55f60c917f"} Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.733860 4792 scope.go:117] "RemoveContainer" containerID="8975476d18dcaaf942344134778975b5bbabbf88e3c8d06bde73a17de3a25a8f" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.786186 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.793474 4792 scope.go:117] "RemoveContainer" containerID="bfd2fae14f45e46a6b547062ea302208192a9808c6e957ed34c6d659193cdc9d" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.794460 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.807811 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.829395 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.844301 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.844835 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.855631 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.855837 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.874909 4792 scope.go:117] "RemoveContainer" containerID="a4e0a88102556775a5229575cd8e26ab16eb9aabf65bd19281ab3dcd22931b5e" Feb 16 21:59:41 crc kubenswrapper[4792]: I0216 21:59:41.938972 4792 scope.go:117] "RemoveContainer" containerID="76465ef6501b564c44f847dd800d4cc0cad4533bb4aef9ca43a9457976ffb2fb" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.005857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006034 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006103 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxtz\" (UniqueName: 
\"kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.006609 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.051339 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb" path="/var/lib/kubelet/pods/7b7c03ab-9a6f-4b82-8fdd-daec075fd7cb/volumes" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.052337 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64dc7aa-7b06-4a29-9684-340f3aa33cfe" path="/var/lib/kubelet/pods/e64dc7aa-7b06-4a29-9684-340f3aa33cfe/volumes" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.083977 4792 scope.go:117] "RemoveContainer" containerID="59b1733af0ad6b6462bd0cf15a44b0b2ea9a98062f99a6575b8c6e60f93f071b" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.091036 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9686f857b-mxcsr" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxtz\" (UniqueName: \"kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108303 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108445 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108461 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts\") pod \"ceilometer-0\" (UID: 
\"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108581 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.108984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.110649 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.128440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.137477 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.137638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.137947 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.139460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxtz\" (UniqueName: \"kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz\") pod \"ceilometer-0\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " pod="openstack/ceilometer-0" Feb 16 21:59:42 crc 
kubenswrapper[4792]: I0216 21:59:42.187110 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.187472 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-56979bc86d-lb4lw" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-log" containerID="cri-o://57a1ba172d41bee6ec9de4e9541ccf03b6291834f9f3bdf34c4527795c990110" gracePeriod=30 Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.188888 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-56979bc86d-lb4lw" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-api" containerID="cri-o://c408ae8f631e5d80a32f245a88269c418e88f194d7645790af7a8a0d7e072ca9" gracePeriod=30 Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.248825 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.275960 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.421551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.421631 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.421717 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nvn\" (UniqueName: \"kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.421772 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.422704 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.422800 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.422871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.423029 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.423091 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs\") pod \"d768be52-4cc1-48af-9ba3-dc7db20975c3\" (UID: \"d768be52-4cc1-48af-9ba3-dc7db20975c3\") " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.423668 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs" (OuterVolumeSpecName: "logs") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.424281 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.424309 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d768be52-4cc1-48af-9ba3-dc7db20975c3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.426792 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn" (OuterVolumeSpecName: "kube-api-access-g2nvn") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "kube-api-access-g2nvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.428653 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts" (OuterVolumeSpecName: "scripts") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.472089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2" (OuterVolumeSpecName: "glance") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.511283 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 21:59:42 crc kubenswrapper[4792]: W0216 21:59:42.520728 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa4253d_0a12_4f95_a89e_ab8cf0507ded.slice/crio-2b5c11be9677ab5a7f1ea401b9c7ca43c6c606df3b50086c6707df9c97c2a43d WatchSource:0}: Error finding container 2b5c11be9677ab5a7f1ea401b9c7ca43c6c606df3b50086c6707df9c97c2a43d: Status 404 returned error can't find the container with id 2b5c11be9677ab5a7f1ea401b9c7ca43c6c606df3b50086c6707df9c97c2a43d Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.530484 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") on node \"crc\" " Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.530528 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.530542 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nvn\" (UniqueName: \"kubernetes.io/projected/d768be52-4cc1-48af-9ba3-dc7db20975c3-kube-api-access-g2nvn\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.587936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data" (OuterVolumeSpecName: "config-data") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.607470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.619560 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d768be52-4cc1-48af-9ba3-dc7db20975c3" (UID: "d768be52-4cc1-48af-9ba3-dc7db20975c3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.621648 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.621805 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2") on node "crc" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.633644 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.633689 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.633704 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.633718 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d768be52-4cc1-48af-9ba3-dc7db20975c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.735204 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d768be52-4cc1-48af-9ba3-dc7db20975c3","Type":"ContainerDied","Data":"42cdef44c36b584888bbd452100382563cd8a27f6bd837a0e48026e3083b9d62"} Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.735265 4792 scope.go:117] "RemoveContainer" containerID="2b3b19200f4b032f8178f3c40fbfe90c01154988f221c2db38d6aa55f60c917f" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.735427 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.739362 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa4253d-0a12-4f95-a89e-ab8cf0507ded","Type":"ContainerStarted","Data":"2b5c11be9677ab5a7f1ea401b9c7ca43c6c606df3b50086c6707df9c97c2a43d"} Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.752887 4792 generic.go:334] "Generic (PLEG): container finished" podID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerID="57a1ba172d41bee6ec9de4e9541ccf03b6291834f9f3bdf34c4527795c990110" exitCode=143 Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.752988 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerDied","Data":"57a1ba172d41bee6ec9de4e9541ccf03b6291834f9f3bdf34c4527795c990110"} Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.762172 4792 scope.go:117] "RemoveContainer" containerID="66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518" Feb 16 21:59:42 crc kubenswrapper[4792]: E0216 21:59:42.762452 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bf864b9dc-xnqfg_openstack(a94eb231-cfd5-48bb-9b0e-4d15ce07695f)\"" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.771700 4792 scope.go:117] "RemoveContainer" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" Feb 16 21:59:42 crc kubenswrapper[4792]: E0216 21:59:42.771938 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-678f746b4c-p48lm_openstack(38a645f0-cc32-41d9-9309-22cd86985b4f)\"" pod="openstack/heat-api-678f746b4c-p48lm" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.791234 4792 scope.go:117] "RemoveContainer" containerID="80619daf70af9937b66b9b66ae6d92131204ed3a4e1011364083e1b29c0da5c8" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.862278 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.891710 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.920217 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:59:42 crc kubenswrapper[4792]: E0216 21:59:42.920863 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-log" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.920886 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-log" Feb 16 21:59:42 crc kubenswrapper[4792]: E0216 21:59:42.920925 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-httpd" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.920932 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-httpd" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.921214 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-httpd" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.921233 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" containerName="glance-log" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.922802 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.925905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.926144 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.948309 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:59:42 crc kubenswrapper[4792]: I0216 21:59:42.959730 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.048909 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.049305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-logs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.049621 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpjm\" (UniqueName: \"kubernetes.io/projected/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-kube-api-access-skpjm\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.049792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.049987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.050491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.050663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.050980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.152960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-logs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153011 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpjm\" (UniqueName: \"kubernetes.io/projected/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-kube-api-access-skpjm\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153182 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153238 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.153821 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-logs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.155050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.162233 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.163220 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
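
The MountDevice/SetUp pairs above correspond to two distinct mount points under /var/lib/kubelet: a once-per-node staging mount and a per-pod bind mount. A small sketch of the assumed path layout, using the values from the pvc-0404c11b-... device mount path logged earlier; the helper names are hypothetical:

```go
// Sketch of the two mount points behind "MountVolume.MountDevice succeeded"
// and "MountVolume.SetUp succeeded". Illustration of the assumed layout only.
package main

import (
	"fmt"
	"path/filepath"
)

// globalMountPath is the per-node staging point a CSI volume is mounted to
// once (MountDevice); volSHA is the driver-specific hash seen in the log.
func globalMountPath(driver, volSHA string) string {
	return filepath.Join("/var/lib/kubelet/plugins/kubernetes.io/csi",
		driver, volSHA, "globalmount")
}

// podMountPath is where SetUp bind-mounts the staged volume for one pod.
func podMountPath(podUID, volName string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID,
		"volumes/kubernetes.io~csi", volName, "mount")
}

func main() {
	fmt.Println(globalMountPath("kubevirt.io.hostpath-provisioner",
		"fda07fe1d1b61a7ca2f0646c25157ff7862921af25dfa15dc58bc6fca46e142c"))
	fmt.Println(podMountPath("2fa4253d-0a12-4f95-a89e-ab8cf0507ded",
		"pvc-0404c11b-6f3f-48e6-b556-13337f9d1fd9"))
}
```

Because the hostpath driver skips staging, only the second path is actually mounted here; drivers that do stage populate both.
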
Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.163263 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ec818cdac5fc3207a3e7d919212a3c077b51c825579526e875ab6fe8a7327b5/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.163398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.163887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.172161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.177347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpjm\" (UniqueName: \"kubernetes.io/projected/35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38-kube-api-access-skpjm\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.249544 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-348fcf31-17fd-4d91-9e22-9bfff1dbfcf2\") pod \"glance-default-internal-api-0\" (UID: \"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38\") " pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.257331 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.728153 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8bbwt"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.730588 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.738439 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8bbwt"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.825232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerStarted","Data":"2ce74918d1d928e648fd3196a9df1cd122942424a45165d2756c06941beffd2f"} Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.832756 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lz59p"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.834528 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa4253d-0a12-4f95-a89e-ab8cf0507ded","Type":"ContainerStarted","Data":"4906b3e6a4ebabdbbdc435cfafe2533074471e7b663ee53f4eb6d1a21629b26a"} Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.834826 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.865452 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lz59p"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.880351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.880419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blnj\" (UniqueName: \"kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.888516 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.889499 4792 scope.go:117] "RemoveContainer" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" Feb 16 21:59:43 crc kubenswrapper[4792]: E0216 21:59:43.890361 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-678f746b4c-p48lm_openstack(38a645f0-cc32-41d9-9309-22cd86985b4f)\"" pod="openstack/heat-api-678f746b4c-p48lm" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.890696 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.929708 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-x7q8m"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.931163 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.948911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-caca-account-create-update-rbbc9"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.950423 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.954712 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-caca-account-create-update-rbbc9"] Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.955157 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.985956 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.986415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.986448 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9blnj\" (UniqueName: \"kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.986521 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnflk\" (UniqueName: \"kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.988751 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.990274 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:43 crc kubenswrapper[4792]: I0216 21:59:43.991361 4792 scope.go:117] "RemoveContainer" containerID="66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518" Feb 16 21:59:43 crc kubenswrapper[4792]: E0216 21:59:43.991666 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bf864b9dc-xnqfg_openstack(a94eb231-cfd5-48bb-9b0e-4d15ce07695f)\"" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" 
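
The repeated `back-off 10s restarting failed container` errors for heat-api and heat-cfnapi are kubelet's CrashLoopBackOff: the restart delay starts at 10s and roughly doubles per failed restart up to a 5m cap, resetting once the container runs cleanly for a while. A minimal sketch of that schedule, as an illustration rather than kubelet's actual implementation:

```go
// Sketch of the CrashLoopBackOff delay schedule referenced above:
// 10s base, doubled per failed restart, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(failedRestarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < failedRestarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("after %d failed restarts: back-off %s\n", n, crashLoopDelay(n))
	}
}
```
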
Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.035232 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9blnj\" (UniqueName: \"kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj\") pod \"nova-api-db-create-8bbwt\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.062450 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.088550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnflk\" (UniqueName: \"kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.088678 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.088750 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts\") pod \"nova-cell1-db-create-x7q8m\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.088884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.088961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjst\" (UniqueName: \"kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst\") pod \"nova-cell1-db-create-x7q8m\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.089065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7s2\" (UniqueName: \"kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.090984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.114858 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mnflk\" (UniqueName: \"kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk\") pod \"nova-cell0-db-create-lz59p\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.120870 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d768be52-4cc1-48af-9ba3-dc7db20975c3" path="/var/lib/kubelet/pods/d768be52-4cc1-48af-9ba3-dc7db20975c3/volumes" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.121796 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x7q8m"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.121825 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.168158 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-92cd-account-create-update-7vhk7"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.169841 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.178584 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.187176 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-92cd-account-create-update-7vhk7"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.194875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djjst\" (UniqueName: \"kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst\") pod \"nova-cell1-db-create-x7q8m\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.194986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7s2\" (UniqueName: \"kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.195205 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.195301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts\") pod \"nova-cell1-db-create-x7q8m\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.208308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts\") pod \"nova-cell1-db-create-x7q8m\" (UID: 
\"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.209226 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.217021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7s2\" (UniqueName: \"kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2\") pod \"nova-api-caca-account-create-update-rbbc9\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.226479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djjst\" (UniqueName: \"kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst\") pod \"nova-cell1-db-create-x7q8m\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.256490 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.280982 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.308981 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcdjv\" (UniqueName: \"kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.309132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.323509 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-96ae-account-create-update-qpv9p"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.325248 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.330478 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.335193 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96ae-account-create-update-qpv9p"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.420222 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.420275 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcdjv\" (UniqueName: \"kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.420362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6p5p\" (UniqueName: \"kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.420440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.421332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.461710 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcdjv\" (UniqueName: \"kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv\") pod \"nova-cell0-92cd-account-create-update-7vhk7\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.515467 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.522411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.522521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6p5p\" (UniqueName: \"kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.524124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.534279 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.552014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6p5p\" (UniqueName: \"kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p\") pod \"nova-cell1-96ae-account-create-update-qpv9p\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.655189 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.924749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa4253d-0a12-4f95-a89e-ab8cf0507ded","Type":"ContainerStarted","Data":"a2cc3e1646fd8a9413a13c6cb731229d946c3eee4576e9f93cd8596d2c9d86d8"} Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.928281 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38","Type":"ContainerStarted","Data":"509033e5a9ba942b8b6134aaac28c09be6d43715157d0f1d656413eb77b16031"} Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.939495 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8bbwt"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.952421 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lz59p"] Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.959540 4792 scope.go:117] "RemoveContainer" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" Feb 16 21:59:44 crc kubenswrapper[4792]: E0216 21:59:44.959795 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-678f746b4c-p48lm_openstack(38a645f0-cc32-41d9-9309-22cd86985b4f)\"" pod="openstack/heat-api-678f746b4c-p48lm" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.960065 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerStarted","Data":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} Feb 16 21:59:44 crc kubenswrapper[4792]: I0216 21:59:44.980985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.980964154 podStartE2EDuration="3.980964154s" podCreationTimestamp="2026-02-16 21:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:44.957463787 +0000 UTC m=+1317.610742678" watchObservedRunningTime="2026-02-16 21:59:44.980964154 +0000 UTC m=+1317.634243045" Feb 16 21:59:44 crc kubenswrapper[4792]: W0216 21:59:44.982643 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod704c2346_0609_42f5_89da_db7d8950ea83.slice/crio-81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af WatchSource:0}: Error finding container 81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af: Status 404 returned error can't find the container with id 81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af Feb 16 21:59:45 crc kubenswrapper[4792]: W0216 21:59:45.000752 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c WatchSource:0}: Error finding container 860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c: Status 404 returned error can't find the container with id 
860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c Feb 16 21:59:45 crc kubenswrapper[4792]: I0216 21:59:45.245474 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x7q8m"] Feb 16 21:59:45 crc kubenswrapper[4792]: I0216 21:59:45.448428 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-92cd-account-create-update-7vhk7"] Feb 16 21:59:45 crc kubenswrapper[4792]: I0216 21:59:45.461969 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-caca-account-create-update-rbbc9"] Feb 16 21:59:45 crc kubenswrapper[4792]: W0216 21:59:45.480851 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0297de14_9244_4cda_93b7_a75b5ac58348.slice/crio-1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae WatchSource:0}: Error finding container 1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae: Status 404 returned error can't find the container with id 1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae Feb 16 21:59:45 crc kubenswrapper[4792]: I0216 21:59:45.616384 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96ae-account-create-update-qpv9p"] Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.020018 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" event={"ID":"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5","Type":"ContainerStarted","Data":"1d3c21e4347845c095baa8683dda50205aa5b17f8bf6bfc5ce6947b9af432009"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.069986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" event={"ID":"b77f3054-3a84-4e5f-8c60-b5906b353be7","Type":"ContainerStarted","Data":"4aa21d327a33eb42a32db90d3eb50299ee9a572a8d2edc80c752489f3509669e"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.079153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-caca-account-create-update-rbbc9" event={"ID":"0297de14-9244-4cda-93b7-a75b5ac58348","Type":"ContainerStarted","Data":"442b8137dfec4f7543c25d6b561a36996e2d2bb50837b4b7d632c0d7a855f393"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.079190 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-caca-account-create-update-rbbc9" event={"ID":"0297de14-9244-4cda-93b7-a75b5ac58348","Type":"ContainerStarted","Data":"1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.157085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x7q8m" event={"ID":"48a55719-97b7-4243-bfa3-e918b61ec76a","Type":"ContainerStarted","Data":"9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.157132 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x7q8m" event={"ID":"48a55719-97b7-4243-bfa3-e918b61ec76a","Type":"ContainerStarted","Data":"200850b4411a1b6406bac06812a9b88b1b3e8578c0e24f11737f7d51b4864024"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.161535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lz59p" 
event={"ID":"704c2346-0609-42f5-89da-db7d8950ea83","Type":"ContainerStarted","Data":"25d8afdb9806799f24e58bdeb956bc822f941d5e88b1763a8eca4e422d7d234d"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.161588 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lz59p" event={"ID":"704c2346-0609-42f5-89da-db7d8950ea83","Type":"ContainerStarted","Data":"81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.167232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38","Type":"ContainerStarted","Data":"cc2c7c77894b772d687145539756356ba42d4f2bba836f30161a818787c7fb77"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.169830 4792 generic.go:334] "Generic (PLEG): container finished" podID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerID="c408ae8f631e5d80a32f245a88269c418e88f194d7645790af7a8a0d7e072ca9" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.169901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerDied","Data":"c408ae8f631e5d80a32f245a88269c418e88f194d7645790af7a8a0d7e072ca9"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.180978 4792 generic.go:334] "Generic (PLEG): container finished" podID="25b826e6-839e-4981-9c0e-1ae295f48f5b" containerID="65cc72c66e6922ac3ace2620557de53d5d6a57924fa68fc1c84d73667a5a1615" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.181075 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8bbwt" event={"ID":"25b826e6-839e-4981-9c0e-1ae295f48f5b","Type":"ContainerDied","Data":"65cc72c66e6922ac3ace2620557de53d5d6a57924fa68fc1c84d73667a5a1615"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.181109 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8bbwt" event={"ID":"25b826e6-839e-4981-9c0e-1ae295f48f5b","Type":"ContainerStarted","Data":"860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.194785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerStarted","Data":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.204196 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-caca-account-create-update-rbbc9" podStartSLOduration=3.20417745 podStartE2EDuration="3.20417745s" podCreationTimestamp="2026-02-16 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:46.186308942 +0000 UTC m=+1318.839587833" watchObservedRunningTime="2026-02-16 21:59:46.20417745 +0000 UTC m=+1318.857456341" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.209646 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-x7q8m" podStartSLOduration=3.209628756 podStartE2EDuration="3.209628756s" podCreationTimestamp="2026-02-16 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:46.207402642 
+0000 UTC m=+1318.860681533" watchObservedRunningTime="2026-02-16 21:59:46.209628756 +0000 UTC m=+1318.862907647" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.265138 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lz59p" podStartSLOduration=3.265116182 podStartE2EDuration="3.265116182s" podCreationTimestamp="2026-02-16 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:59:46.231695873 +0000 UTC m=+1318.884974764" watchObservedRunningTime="2026-02-16 21:59:46.265116182 +0000 UTC m=+1318.918395083" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.426857 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:59:47 crc kubenswrapper[4792]: E0216 21:59:46.520093 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a55719_97b7_4243_bfa3_e918b61ec76a.slice/crio-conmon-9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod704c2346_0609_42f5_89da_db7d8950ea83.slice/crio-conmon-25d8afdb9806799f24e58bdeb956bc822f941d5e88b1763a8eca4e422d7d234d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a55719_97b7_4243_bfa3_e918b61ec76a.slice/crio-9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0297de14_9244_4cda_93b7_a75b5ac58348.slice/crio-conmon-442b8137dfec4f7543c25d6b561a36996e2d2bb50837b4b7d632c0d7a855f393.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550572 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550796 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550852 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.550979 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.551016 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxkjj\" (UniqueName: \"kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj\") pod \"4654e37f-1c84-466d-a2a7-ada1474f811c\" (UID: \"4654e37f-1c84-466d-a2a7-ada1474f811c\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.553351 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs" (OuterVolumeSpecName: "logs") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.570575 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj" (OuterVolumeSpecName: "kube-api-access-bxkjj") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "kube-api-access-bxkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.570726 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts" (OuterVolumeSpecName: "scripts") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.653772 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxkjj\" (UniqueName: \"kubernetes.io/projected/4654e37f-1c84-466d-a2a7-ada1474f811c-kube-api-access-bxkjj\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.653796 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.653806 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4654e37f-1c84-466d-a2a7-ada1474f811c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.714072 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-664b984f-mtmnp" podUID="6ebd8871-a518-4c36-89af-cefd9a5835b8" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.765548 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.770666 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.779649 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-745698795b-zlr5t" podUID="d0209b0b-6ef4-4595-80ad-27f346d3bbe1" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.823713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data" (OuterVolumeSpecName: "config-data") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.859652 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.859952 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.859963 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.967561 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4654e37f-1c84-466d-a2a7-ada1474f811c" (UID: "4654e37f-1c84-466d-a2a7-ada1474f811c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:46.979299 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4654e37f-1c84-466d-a2a7-ada1474f811c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.207615 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38","Type":"ContainerStarted","Data":"de34b0ae6491e5bf2698d789e939ea177d41c0fab097c808aadc9c6e84e9801b"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.212412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerStarted","Data":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.215419 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56979bc86d-lb4lw" event={"ID":"4654e37f-1c84-466d-a2a7-ada1474f811c","Type":"ContainerDied","Data":"e3bba7580c0ce6a6ee2d1cbe9fafd053fd677cb20d4b6ec57f54c0fb6c0f43d8"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.215459 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56979bc86d-lb4lw" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.215639 4792 scope.go:117] "RemoveContainer" containerID="c408ae8f631e5d80a32f245a88269c418e88f194d7645790af7a8a0d7e072ca9" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.217467 4792 generic.go:334] "Generic (PLEG): container finished" podID="bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" containerID="900470fd3943b2444291dbb3c44a9dc953e1dc8f8ba04f6a812cf15af9a91a9c" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.217517 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" event={"ID":"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5","Type":"ContainerDied","Data":"900470fd3943b2444291dbb3c44a9dc953e1dc8f8ba04f6a812cf15af9a91a9c"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.220113 4792 generic.go:334] "Generic (PLEG): container finished" podID="b77f3054-3a84-4e5f-8c60-b5906b353be7" containerID="4de1185d51c9a3255491dfc5bddf05c12c939916cc016b92bc061fb1423e60fa" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.220156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" event={"ID":"b77f3054-3a84-4e5f-8c60-b5906b353be7","Type":"ContainerDied","Data":"4de1185d51c9a3255491dfc5bddf05c12c939916cc016b92bc061fb1423e60fa"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.224106 4792 generic.go:334] "Generic (PLEG): container finished" podID="0297de14-9244-4cda-93b7-a75b5ac58348" containerID="442b8137dfec4f7543c25d6b561a36996e2d2bb50837b4b7d632c0d7a855f393" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.224158 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-caca-account-create-update-rbbc9" event={"ID":"0297de14-9244-4cda-93b7-a75b5ac58348","Type":"ContainerDied","Data":"442b8137dfec4f7543c25d6b561a36996e2d2bb50837b4b7d632c0d7a855f393"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.228544 4792 generic.go:334] "Generic (PLEG): container finished" podID="48a55719-97b7-4243-bfa3-e918b61ec76a" containerID="9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.228585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x7q8m" event={"ID":"48a55719-97b7-4243-bfa3-e918b61ec76a","Type":"ContainerDied","Data":"9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.233767 4792 generic.go:334] "Generic (PLEG): container finished" podID="704c2346-0609-42f5-89da-db7d8950ea83" containerID="25d8afdb9806799f24e58bdeb956bc822f941d5e88b1763a8eca4e422d7d234d" exitCode=0 Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.233977 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lz59p" event={"ID":"704c2346-0609-42f5-89da-db7d8950ea83","Type":"ContainerDied","Data":"25d8afdb9806799f24e58bdeb956bc822f941d5e88b1763a8eca4e422d7d234d"} Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.242169 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.242151351 podStartE2EDuration="5.242151351s" podCreationTimestamp="2026-02-16 21:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 21:59:47.235151492 +0000 UTC m=+1319.888430383" watchObservedRunningTime="2026-02-16 21:59:47.242151351 +0000 UTC m=+1319.895430242" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.281185 4792 scope.go:117] "RemoveContainer" containerID="57a1ba172d41bee6ec9de4e9541ccf03b6291834f9f3bdf34c4527795c990110" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.387696 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.402168 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-56979bc86d-lb4lw"] Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.721451 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.799200 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts\") pod \"25b826e6-839e-4981-9c0e-1ae295f48f5b\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.799413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blnj\" (UniqueName: \"kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj\") pod \"25b826e6-839e-4981-9c0e-1ae295f48f5b\" (UID: \"25b826e6-839e-4981-9c0e-1ae295f48f5b\") " Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.801494 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25b826e6-839e-4981-9c0e-1ae295f48f5b" (UID: "25b826e6-839e-4981-9c0e-1ae295f48f5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.807138 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj" (OuterVolumeSpecName: "kube-api-access-9blnj") pod "25b826e6-839e-4981-9c0e-1ae295f48f5b" (UID: "25b826e6-839e-4981-9c0e-1ae295f48f5b"). InnerVolumeSpecName "kube-api-access-9blnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.907046 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b826e6-839e-4981-9c0e-1ae295f48f5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.907078 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9blnj\" (UniqueName: \"kubernetes.io/projected/25b826e6-839e-4981-9c0e-1ae295f48f5b-kube-api-access-9blnj\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:47 crc kubenswrapper[4792]: I0216 21:59:47.999579 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-789d9b5ffd-kgfxb" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.169845 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" path="/var/lib/kubelet/pods/4654e37f-1c84-466d-a2a7-ada1474f811c/volumes" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.199237 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.299814 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8bbwt" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.301020 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8bbwt" event={"ID":"25b826e6-839e-4981-9c0e-1ae295f48f5b","Type":"ContainerDied","Data":"860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c"} Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.301045 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.387130 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-fdc6c774c-p5p85" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.457985 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.757104 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.854346 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle\") pod \"38a645f0-cc32-41d9-9309-22cd86985b4f\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.854699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsgwr\" (UniqueName: \"kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr\") pod \"38a645f0-cc32-41d9-9309-22cd86985b4f\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.854898 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data\") pod \"38a645f0-cc32-41d9-9309-22cd86985b4f\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.855187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom\") pod \"38a645f0-cc32-41d9-9309-22cd86985b4f\" (UID: \"38a645f0-cc32-41d9-9309-22cd86985b4f\") " Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.862945 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr" (OuterVolumeSpecName: "kube-api-access-vsgwr") pod "38a645f0-cc32-41d9-9309-22cd86985b4f" (UID: "38a645f0-cc32-41d9-9309-22cd86985b4f"). InnerVolumeSpecName "kube-api-access-vsgwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.864437 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38a645f0-cc32-41d9-9309-22cd86985b4f" (UID: "38a645f0-cc32-41d9-9309-22cd86985b4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.893756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a645f0-cc32-41d9-9309-22cd86985b4f" (UID: "38a645f0-cc32-41d9-9309-22cd86985b4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.925092 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-dcdcd9bbc-f9nr2" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.958782 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.958812 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsgwr\" (UniqueName: \"kubernetes.io/projected/38a645f0-cc32-41d9-9309-22cd86985b4f-kube-api-access-vsgwr\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:48 crc kubenswrapper[4792]: I0216 21:59:48.958822 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.015627 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data" (OuterVolumeSpecName: "config-data") pod "38a645f0-cc32-41d9-9309-22cd86985b4f" (UID: "38a645f0-cc32-41d9-9309-22cd86985b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.018471 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.018714 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-75477f9d95-6ddxt" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerName="heat-engine" containerID="cri-o://ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" gracePeriod=60 Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.061693 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a645f0-cc32-41d9-9309-22cd86985b4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.400042 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.400825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerStarted","Data":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.401817 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.450013 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x7q8m" event={"ID":"48a55719-97b7-4243-bfa3-e918b61ec76a","Type":"ContainerDied","Data":"200850b4411a1b6406bac06812a9b88b1b3e8578c0e24f11737f7d51b4864024"} Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.450064 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200850b4411a1b6406bac06812a9b88b1b3e8578c0e24f11737f7d51b4864024" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.450150 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x7q8m" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.502893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678f746b4c-p48lm" event={"ID":"38a645f0-cc32-41d9-9309-22cd86985b4f","Type":"ContainerDied","Data":"d0f61a3a77405da44052a0d9381d9af50f163ad308695c3280c70a6c2d6e1f2d"} Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.502949 4792 scope.go:117] "RemoveContainer" containerID="f52b346831812b70dd3ede4a9b36e75c005d3e1f84af37a8fa42cc20eaf2746e" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.503038 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-678f746b4c-p48lm" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.530396 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.560037072 podStartE2EDuration="8.530376687s" podCreationTimestamp="2026-02-16 21:59:41 +0000 UTC" firstStartedPulling="2026-02-16 21:59:42.973553311 +0000 UTC m=+1315.626832192" lastFinishedPulling="2026-02-16 21:59:47.943892916 +0000 UTC m=+1320.597171807" observedRunningTime="2026-02-16 21:59:49.47171671 +0000 UTC m=+1322.124995601" watchObservedRunningTime="2026-02-16 21:59:49.530376687 +0000 UTC m=+1322.183655578" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.594909 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djjst\" (UniqueName: \"kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst\") pod \"48a55719-97b7-4243-bfa3-e918b61ec76a\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.595292 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts\") pod \"48a55719-97b7-4243-bfa3-e918b61ec76a\" (UID: \"48a55719-97b7-4243-bfa3-e918b61ec76a\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.596908 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48a55719-97b7-4243-bfa3-e918b61ec76a" (UID: "48a55719-97b7-4243-bfa3-e918b61ec76a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.605256 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.607898 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst" (OuterVolumeSpecName: "kube-api-access-djjst") pod "48a55719-97b7-4243-bfa3-e918b61ec76a" (UID: "48a55719-97b7-4243-bfa3-e918b61ec76a"). InnerVolumeSpecName "kube-api-access-djjst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.610050 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.632244 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-678f746b4c-p48lm"] Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.662577 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.665039 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.708076 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djjst\" (UniqueName: \"kubernetes.io/projected/48a55719-97b7-4243-bfa3-e918b61ec76a-kube-api-access-djjst\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.708111 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a55719-97b7-4243-bfa3-e918b61ec76a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.715044 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.718565 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809360 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts\") pod \"0297de14-9244-4cda-93b7-a75b5ac58348\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809416 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ft6\" (UniqueName: \"kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6\") pod \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809451 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcdjv\" (UniqueName: \"kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv\") pod \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809497 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s7s2\" (UniqueName: \"kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2\") pod \"0297de14-9244-4cda-93b7-a75b5ac58348\" (UID: \"0297de14-9244-4cda-93b7-a75b5ac58348\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom\") pod \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809637 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data\") pod \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809700 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts\") pod \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\" (UID: \"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5\") 
" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.809724 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle\") pod \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\" (UID: \"a94eb231-cfd5-48bb-9b0e-4d15ce07695f\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.810778 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0297de14-9244-4cda-93b7-a75b5ac58348" (UID: "0297de14-9244-4cda-93b7-a75b5ac58348"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.811056 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" (UID: "bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.813819 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2" (OuterVolumeSpecName: "kube-api-access-4s7s2") pod "0297de14-9244-4cda-93b7-a75b5ac58348" (UID: "0297de14-9244-4cda-93b7-a75b5ac58348"). InnerVolumeSpecName "kube-api-access-4s7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.815489 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a94eb231-cfd5-48bb-9b0e-4d15ce07695f" (UID: "a94eb231-cfd5-48bb-9b0e-4d15ce07695f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.815641 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6" (OuterVolumeSpecName: "kube-api-access-54ft6") pod "a94eb231-cfd5-48bb-9b0e-4d15ce07695f" (UID: "a94eb231-cfd5-48bb-9b0e-4d15ce07695f"). InnerVolumeSpecName "kube-api-access-54ft6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.815582 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv" (OuterVolumeSpecName: "kube-api-access-lcdjv") pod "bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" (UID: "bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5"). InnerVolumeSpecName "kube-api-access-lcdjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.855696 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a94eb231-cfd5-48bb-9b0e-4d15ce07695f" (UID: "a94eb231-cfd5-48bb-9b0e-4d15ce07695f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.914966 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts\") pod \"704c2346-0609-42f5-89da-db7d8950ea83\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.915065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnflk\" (UniqueName: \"kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk\") pod \"704c2346-0609-42f5-89da-db7d8950ea83\" (UID: \"704c2346-0609-42f5-89da-db7d8950ea83\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.915099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts\") pod \"b77f3054-3a84-4e5f-8c60-b5906b353be7\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.915207 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6p5p\" (UniqueName: \"kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p\") pod \"b77f3054-3a84-4e5f-8c60-b5906b353be7\" (UID: \"b77f3054-3a84-4e5f-8c60-b5906b353be7\") " Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.919894 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b77f3054-3a84-4e5f-8c60-b5906b353be7" (UID: "b77f3054-3a84-4e5f-8c60-b5906b353be7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923699 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b77f3054-3a84-4e5f-8c60-b5906b353be7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923738 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923757 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923777 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297de14-9244-4cda-93b7-a75b5ac58348-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923817 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ft6\" (UniqueName: \"kubernetes.io/projected/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-kube-api-access-54ft6\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923836 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcdjv\" (UniqueName: \"kubernetes.io/projected/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5-kube-api-access-lcdjv\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923849 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s7s2\" (UniqueName: \"kubernetes.io/projected/0297de14-9244-4cda-93b7-a75b5ac58348-kube-api-access-4s7s2\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.923867 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.929470 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "704c2346-0609-42f5-89da-db7d8950ea83" (UID: "704c2346-0609-42f5-89da-db7d8950ea83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.944812 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data" (OuterVolumeSpecName: "config-data") pod "a94eb231-cfd5-48bb-9b0e-4d15ce07695f" (UID: "a94eb231-cfd5-48bb-9b0e-4d15ce07695f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.945083 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p" (OuterVolumeSpecName: "kube-api-access-s6p5p") pod "b77f3054-3a84-4e5f-8c60-b5906b353be7" (UID: "b77f3054-3a84-4e5f-8c60-b5906b353be7"). InnerVolumeSpecName "kube-api-access-s6p5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:49 crc kubenswrapper[4792]: I0216 21:59:49.945194 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk" (OuterVolumeSpecName: "kube-api-access-mnflk") pod "704c2346-0609-42f5-89da-db7d8950ea83" (UID: "704c2346-0609-42f5-89da-db7d8950ea83"). InnerVolumeSpecName "kube-api-access-mnflk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.030109 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/704c2346-0609-42f5-89da-db7d8950ea83-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.030735 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnflk\" (UniqueName: \"kubernetes.io/projected/704c2346-0609-42f5-89da-db7d8950ea83-kube-api-access-mnflk\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.030894 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94eb231-cfd5-48bb-9b0e-4d15ce07695f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.030913 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6p5p\" (UniqueName: \"kubernetes.io/projected/b77f3054-3a84-4e5f-8c60-b5906b353be7-kube-api-access-s6p5p\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.045698 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" path="/var/lib/kubelet/pods/38a645f0-cc32-41d9-9309-22cd86985b4f/volumes" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.513628 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-caca-account-create-update-rbbc9" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.513633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-caca-account-create-update-rbbc9" event={"ID":"0297de14-9244-4cda-93b7-a75b5ac58348","Type":"ContainerDied","Data":"1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae"} Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.514052 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff63e3784e94abe2bdf7fca5e482c096aecdc58c3e991deb80d681b2c3903ae" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.517032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lz59p" event={"ID":"704c2346-0609-42f5-89da-db7d8950ea83","Type":"ContainerDied","Data":"81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af"} Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.517085 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b5ba8a2e11339ff8d36c226d588a8b43238d6d3f9ace85adf9135599c236af" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.517057 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lz59p" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.520621 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.520609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf864b9dc-xnqfg" event={"ID":"a94eb231-cfd5-48bb-9b0e-4d15ce07695f","Type":"ContainerDied","Data":"7e0cfd4bf2323b15be1963283a0a6deb08d28dc41dba1cfc29aee3f64d9292b6"} Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.520783 4792 scope.go:117] "RemoveContainer" containerID="66ff80b069c6378ef3333add5469d97eb3d438aee399159a5d9449ddf3215518" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.522710 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.522768 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-92cd-account-create-update-7vhk7" event={"ID":"bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5","Type":"ContainerDied","Data":"1d3c21e4347845c095baa8683dda50205aa5b17f8bf6bfc5ce6947b9af432009"} Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.522869 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3c21e4347845c095baa8683dda50205aa5b17f8bf6bfc5ce6947b9af432009" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.525019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" event={"ID":"b77f3054-3a84-4e5f-8c60-b5906b353be7","Type":"ContainerDied","Data":"4aa21d327a33eb42a32db90d3eb50299ee9a572a8d2edc80c752489f3509669e"} Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.525059 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa21d327a33eb42a32db90d3eb50299ee9a572a8d2edc80c752489f3509669e" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.525095 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-96ae-account-create-update-qpv9p" Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.557579 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:50 crc kubenswrapper[4792]: I0216 21:59:50.570818 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6bf864b9dc-xnqfg"] Feb 16 21:59:51 crc kubenswrapper[4792]: I0216 21:59:51.177891 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:51 crc kubenswrapper[4792]: E0216 21:59:51.602920 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:59:51 crc kubenswrapper[4792]: E0216 21:59:51.604438 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:59:51 crc kubenswrapper[4792]: E0216 21:59:51.605627 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 21:59:51 crc kubenswrapper[4792]: E0216 21:59:51.605699 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75477f9d95-6ddxt" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerName="heat-engine" Feb 16 21:59:51 crc kubenswrapper[4792]: I0216 21:59:51.616545 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:59:51 crc kubenswrapper[4792]: I0216 21:59:51.616615 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 21:59:51 crc kubenswrapper[4792]: I0216 21:59:51.659253 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:59:51 crc kubenswrapper[4792]: I0216 21:59:51.673218 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.071627 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" path="/var/lib/kubelet/pods/a94eb231-cfd5-48bb-9b0e-4d15ce07695f/volumes" Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.551899 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.551952 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.552260 4792 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-central-agent" containerID="cri-o://15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" gracePeriod=30 Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.552391 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="proxy-httpd" containerID="cri-o://9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" gracePeriod=30 Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.552435 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="sg-core" containerID="cri-o://8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" gracePeriod=30 Feb 16 21:59:52 crc kubenswrapper[4792]: I0216 21:59:52.552464 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-notification-agent" containerID="cri-o://56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" gracePeriod=30 Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.258277 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.259020 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.302045 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.325063 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.464934 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.564841 4792 generic.go:334] "Generic (PLEG): container finished" podID="cb02bce2-5353-4048-87f6-204231f09f2d" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" exitCode=0 Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.564874 4792 generic.go:334] "Generic (PLEG): container finished" podID="cb02bce2-5353-4048-87f6-204231f09f2d" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" exitCode=2 Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.564883 4792 generic.go:334] "Generic (PLEG): container finished" podID="cb02bce2-5353-4048-87f6-204231f09f2d" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" exitCode=0 Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.564896 4792 generic.go:334] "Generic (PLEG): container finished" podID="cb02bce2-5353-4048-87f6-204231f09f2d" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" exitCode=0 Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566743 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerDied","Data":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566807 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerDied","Data":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566825 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerDied","Data":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566838 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerDied","Data":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566857 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566875 4792 scope.go:117] "RemoveContainer" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566874 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb02bce2-5353-4048-87f6-204231f09f2d","Type":"ContainerDied","Data":"2ce74918d1d928e648fd3196a9df1cd122942424a45165d2756c06941beffd2f"} Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.567043 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.566858 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.590935 4792 scope.go:117] "RemoveContainer" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.612415 4792 scope.go:117] "RemoveContainer" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.617716 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.617815 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.617849 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.617898 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.618022 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.618065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmxtz\" (UniqueName: \"kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.618151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd\") pod \"cb02bce2-5353-4048-87f6-204231f09f2d\" (UID: \"cb02bce2-5353-4048-87f6-204231f09f2d\") " Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.619351 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.619432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.629386 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts" (OuterVolumeSpecName: "scripts") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.629460 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz" (OuterVolumeSpecName: "kube-api-access-qmxtz") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "kube-api-access-qmxtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.637885 4792 scope.go:117] "RemoveContainer" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.661737 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.685929 4792 scope.go:117] "RemoveContainer" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.686938 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": container with ID starting with 9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d not found: ID does not exist" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.686970 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} err="failed to get container status \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": rpc error: code = NotFound desc = could not find container \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": container with ID starting with 9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.686992 4792 scope.go:117] "RemoveContainer" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.690854 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": container with ID starting with 8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c not found: ID does not exist" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.690883 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} err="failed to get container status \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": rpc error: code = NotFound desc = could not find container \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": container with ID starting with 8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.690912 4792 scope.go:117] "RemoveContainer" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.693382 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": container with ID starting with 56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23 not found: ID does not exist" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.693436 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} err="failed to get container status \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": rpc error: code = NotFound desc = could not 
find container \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": container with ID starting with 56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.693452 4792 scope.go:117] "RemoveContainer" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.694436 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": container with ID starting with 15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6 not found: ID does not exist" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.694460 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} err="failed to get container status \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": rpc error: code = NotFound desc = could not find container \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": container with ID starting with 15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.694472 4792 scope.go:117] "RemoveContainer" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.694818 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} err="failed to get container status \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": rpc error: code = NotFound desc = could not find container \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": container with ID starting with 9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.694839 4792 scope.go:117] "RemoveContainer" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695099 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} err="failed to get container status \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": rpc error: code = NotFound desc = could not find container \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": container with ID starting with 8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695121 4792 scope.go:117] "RemoveContainer" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695365 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} err="failed to get container status \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": rpc error: code = NotFound desc = could not 
find container \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": container with ID starting with 56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695392 4792 scope.go:117] "RemoveContainer" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695656 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} err="failed to get container status \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": rpc error: code = NotFound desc = could not find container \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": container with ID starting with 15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695679 4792 scope.go:117] "RemoveContainer" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695967 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} err="failed to get container status \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": rpc error: code = NotFound desc = could not find container \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": container with ID starting with 9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.695985 4792 scope.go:117] "RemoveContainer" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696223 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} err="failed to get container status \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": rpc error: code = NotFound desc = could not find container \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": container with ID starting with 8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696239 4792 scope.go:117] "RemoveContainer" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696498 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} err="failed to get container status \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": rpc error: code = NotFound desc = could not find container \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": container with ID starting with 56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696527 4792 scope.go:117] "RemoveContainer" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696762 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} err="failed to get container status \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": rpc error: code = NotFound desc = could not find container \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": container with ID starting with 15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.696788 4792 scope.go:117] "RemoveContainer" containerID="9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.697433 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"} err="failed to get container status \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": rpc error: code = NotFound desc = could not find container \"9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d\": container with ID starting with 9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.697462 4792 scope.go:117] "RemoveContainer" containerID="8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.697690 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c"} err="failed to get container status \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": rpc error: code = NotFound desc = could not find container \"8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c\": container with ID starting with 8babccedbd206c810277fd1b8cc9ea8fb8352c8a09b49416b6731dc4689c613c not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.697716 4792 scope.go:117] "RemoveContainer" containerID="56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.699460 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23"} err="failed to get container status \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": rpc error: code = NotFound desc = could not find container \"56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23\": container with ID starting with 56a9febf6cd5b9a09a2affe10de05a45e4f2a16d731b822cb9c188af684fcc23 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.699480 4792 scope.go:117] "RemoveContainer" containerID="15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.702839 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6"} err="failed to get container status \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": rpc error: code = NotFound desc = could not find container \"15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6\": container with ID starting with 
15e9f807b4dd882d947f465afba30ebeefdfb04670618565a871c983f7ac26b6 not found: ID does not exist" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.722622 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.722723 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.722740 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmxtz\" (UniqueName: \"kubernetes.io/projected/cb02bce2-5353-4048-87f6-204231f09f2d-kube-api-access-qmxtz\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.722754 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb02bce2-5353-4048-87f6-204231f09f2d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.722765 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.756273 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.769816 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data" (OuterVolumeSpecName: "config-data") pod "cb02bce2-5353-4048-87f6-204231f09f2d" (UID: "cb02bce2-5353-4048-87f6-204231f09f2d"). InnerVolumeSpecName "config-data". 
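The repeated "DeleteContainer returned error ... NotFound" cycles above are benign: the containers were already removed, so every retry gets NotFound back from the runtime. A sketch of why NotFound can be treated as "already gone" in an idempotent remove; removeContainer is a stand-in, not the kubelet's code, and the snippet assumes the google.golang.org/grpc module:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call that fails
// the way CRI-O does above when the container no longer exists.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "9b324b1b843a75fe78b6d08eb4555a32b57d3270af18038bc469e924e739e60d"
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Println("container already removed; treating as success")
			return
		}
		fmt.Printf("remove failed: %v\n", err)
	}
}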
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.825073 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.825282 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb02bce2-5353-4048-87f6-204231f09f2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.904990 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.919867 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.940903 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941386 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-log" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941408 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-log" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941423 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="sg-core" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941429 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="sg-core" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941438 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77f3054-3a84-4e5f-8c60-b5906b353be7" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941446 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77f3054-3a84-4e5f-8c60-b5906b353be7" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941466 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="proxy-httpd" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941471 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="proxy-httpd" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941479 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a55719-97b7-4243-bfa3-e918b61ec76a" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941484 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a55719-97b7-4243-bfa3-e918b61ec76a" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941498 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941503 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941512 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941519 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941531 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0297de14-9244-4cda-93b7-a75b5ac58348" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941539 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0297de14-9244-4cda-93b7-a75b5ac58348" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941549 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704c2346-0609-42f5-89da-db7d8950ea83" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941554 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="704c2346-0609-42f5-89da-db7d8950ea83" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941566 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941572 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941580 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941585 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941610 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941617 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941630 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-api" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941637 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-api" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941650 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-notification-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941656 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-notification-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: E0216 21:59:53.941663 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b826e6-839e-4981-9c0e-1ae295f48f5b" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941668 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b826e6-839e-4981-9c0e-1ae295f48f5b" containerName="mariadb-database-create" Feb 16 21:59:53 crc 
kubenswrapper[4792]: E0216 21:59:53.941681 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-central-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.941687 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-central-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942720 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942735 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b826e6-839e-4981-9c0e-1ae295f48f5b" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942747 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="704c2346-0609-42f5-89da-db7d8950ea83" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942757 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77f3054-3a84-4e5f-8c60-b5906b353be7" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942770 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942776 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0297de14-9244-4cda-93b7-a75b5ac58348" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942786 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="sg-core" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942793 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-central-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942802 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="proxy-httpd" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942813 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" containerName="mariadb-account-create-update" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942824 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-log" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942835 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" containerName="ceilometer-notification-agent" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942848 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4654e37f-1c84-466d-a2a7-ada1474f811c" containerName="placement-api" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.942855 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a55719-97b7-4243-bfa3-e918b61ec76a" containerName="mariadb-database-create" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.943224 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94eb231-cfd5-48bb-9b0e-4d15ce07695f" containerName="heat-cfnapi" Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 
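The cpu_manager/state_mem/memory_manager lines above prune per-pod, per-container resource assignments left over from pods that no longer exist. A simplified sketch of that bookkeeping, with illustrative types rather than the kubelet's own:

package main

import "fmt"

// removeStaleState drops every assignment belonging to a pod that is no
// longer active, mirroring the "RemoveStaleState: removing container"
// lines in the log.
func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for containerName := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, containerName)
		}
		delete(assignments, podUID)
	}
}

func main() {
	assignments := map[string]map[string]string{
		"cb02bce2-5353-4048-87f6-204231f09f2d": {
			"sg-core":     "cpuset 0-3",
			"proxy-httpd": "cpuset 0-3",
		},
	}
	// The old ceilometer-0 pod has been deleted, so nothing is active.
	removeStaleState(assignments, map[string]bool{})
}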
Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.943240 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a645f0-cc32-41d9-9309-22cd86985b4f" containerName="heat-api"
Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.944765 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.948294 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.948697 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 21:59:53 crc kubenswrapper[4792]: I0216 21:59:53.973302 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4nk5\" (UniqueName: \"kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029647 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029708 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.029755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.039108 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb02bce2-5353-4048-87f6-204231f09f2d" path="/var/lib/kubelet/pods/cb02bce2-5353-4048-87f6-204231f09f2d/volumes"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131644 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4nk5\" (UniqueName: \"kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131831 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.131961 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.132003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.132434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.132493 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.135395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.135854 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.136024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.137197 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.150291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4nk5\" (UniqueName: \"kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5\") pod \"ceilometer-0\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.270725 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.526872 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bjbmf"]
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.532354 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.532369 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.532993 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gzkq2" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.564752 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bjbmf"] Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.654370 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.654470 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.654551 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.654575 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58bb\" (UniqueName: \"kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.756960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.757437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.757479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58bb\" (UniqueName: \"kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb\") pod \"nova-cell0-conductor-db-sync-bjbmf\" 
(UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.757675 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.764137 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.765264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.766147 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.782953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58bb\" (UniqueName: \"kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb\") pod \"nova-cell0-conductor-db-sync-bjbmf\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.844075 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 21:59:54 crc kubenswrapper[4792]: W0216 21:59:54.844716 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b2b11d_56fb_403e_bd50_28eee88aa2f5.slice/crio-48e10f5bbed1160c6cc2c4c5ac49e1eb3ffa28e44a087e021e9f7cf370d5b927 WatchSource:0}: Error finding container 48e10f5bbed1160c6cc2c4c5ac49e1eb3ffa28e44a087e021e9f7cf370d5b927: Status 404 returned error can't find the container with id 48e10f5bbed1160c6cc2c4c5ac49e1eb3ffa28e44a087e021e9f7cf370d5b927 Feb 16 21:59:54 crc kubenswrapper[4792]: I0216 21:59:54.855729 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.416590 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bjbmf"] Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.632908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerStarted","Data":"028b32f6520b913ea8298dfdd5f786c7392349fbdc826d480921286f5835206b"} Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.633252 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerStarted","Data":"48e10f5bbed1160c6cc2c4c5ac49e1eb3ffa28e44a087e021e9f7cf370d5b927"} Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.637558 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.637587 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:59:55 crc kubenswrapper[4792]: I0216 21:59:55.637551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" event={"ID":"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a","Type":"ContainerStarted","Data":"972967cb156879240ccf09aab943713934705e247713d0c200814dc912c91326"} Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.665148 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerStarted","Data":"86956197ad05d331cf1caf44e1d6b0ffc78e365f87f030ebbd2543526eb87fe5"} Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.808993 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.809102 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.957561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.957713 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.968684 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 21:59:56 crc kubenswrapper[4792]: I0216 21:59:56.989416 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 21:59:57 crc kubenswrapper[4792]: E0216 21:59:57.426880 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b5ce16_7f9b_48f2_9e59_7c08a88a84f8.slice/crio-ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.700710 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerID="ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" exitCode=0 Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.700964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75477f9d95-6ddxt" event={"ID":"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8","Type":"ContainerDied","Data":"ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9"} Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.707844 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerStarted","Data":"a109319d2232875b5ed9a094053b1442c957e794b564cc27e8cbf3ceffa33a43"} Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.831863 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.895087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle\") pod \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.895500 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom\") pod \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.895532 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data\") pod \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.895551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzwpg\" (UniqueName: \"kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg\") pod \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\" (UID: \"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8\") " Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.911118 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg" (OuterVolumeSpecName: "kube-api-access-pzwpg") pod "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" (UID: "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8"). InnerVolumeSpecName "kube-api-access-pzwpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.914894 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" (UID: "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.971944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" (UID: "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.999213 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.999245 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:57 crc kubenswrapper[4792]: I0216 21:59:57.999256 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzwpg\" (UniqueName: \"kubernetes.io/projected/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-kube-api-access-pzwpg\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.024711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data" (OuterVolumeSpecName: "config-data") pod "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" (UID: "62b5ce16-7f9b-48f2-9e59-7c08a88a84f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.101964 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.720141 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75477f9d95-6ddxt" event={"ID":"62b5ce16-7f9b-48f2-9e59-7c08a88a84f8","Type":"ContainerDied","Data":"0af364830efcaf6dcb666ecdc999f9e1531ce1c7d07f9cae23405b251cd7f09c"} Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.720188 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75477f9d95-6ddxt" Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.720475 4792 scope.go:117] "RemoveContainer" containerID="ffd4401f73601b3c9d8331655ee7322799f708ed14f8336135378ff6d73f35b9" Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.795067 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:58 crc kubenswrapper[4792]: I0216 21:59:58.813732 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-75477f9d95-6ddxt"] Feb 16 21:59:59 crc kubenswrapper[4792]: I0216 21:59:59.740948 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerStarted","Data":"5f43ec0740d73701fef1b2e5b3837237c18f8f04f5839b176e65dfb14127c274"} Feb 16 21:59:59 crc kubenswrapper[4792]: I0216 21:59:59.741809 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 21:59:59 crc kubenswrapper[4792]: I0216 21:59:59.765716 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7222343909999998 podStartE2EDuration="6.765697724s" podCreationTimestamp="2026-02-16 21:59:53 +0000 UTC" firstStartedPulling="2026-02-16 21:59:54.847075067 +0000 UTC m=+1327.500353968" lastFinishedPulling="2026-02-16 21:59:58.89053841 +0000 UTC m=+1331.543817301" observedRunningTime="2026-02-16 21:59:59.765090736 +0000 UTC m=+1332.418369647" watchObservedRunningTime="2026-02-16 21:59:59.765697724 +0000 UTC m=+1332.418976615" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.044636 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" path="/var/lib/kubelet/pods/62b5ce16-7f9b-48f2-9e59-7c08a88a84f8/volumes" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.172999 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4"] Feb 16 22:00:00 crc kubenswrapper[4792]: E0216 22:00:00.174290 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerName="heat-engine" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.174318 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerName="heat-engine" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.174557 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b5ce16-7f9b-48f2-9e59-7c08a88a84f8" containerName="heat-engine" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.175820 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.186805 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4"] Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.202209 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.202352 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.273745 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.273841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.273889 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jf5\" (UniqueName: \"kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.378058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.378199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.378286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jf5\" (UniqueName: \"kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.379296 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume\") pod 
\"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.386699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.400429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jf5\" (UniqueName: \"kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5\") pod \"collect-profiles-29521320-8zfz4\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:00 crc kubenswrapper[4792]: I0216 22:00:00.546248 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:01 crc kubenswrapper[4792]: E0216 22:00:01.032200 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:01 crc kubenswrapper[4792]: I0216 22:00:01.100440 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4"] Feb 16 22:00:01 crc kubenswrapper[4792]: I0216 22:00:01.532368 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:00:01 crc kubenswrapper[4792]: I0216 22:00:01.532437 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:00:07 crc kubenswrapper[4792]: W0216 22:00:07.287500 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddea9f2da_4123_4e53_a53f_f760412371e5.slice/crio-23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950 WatchSource:0}: Error finding container 23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950: Status 404 returned error can't find the container with id 23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950 Feb 16 22:00:07 crc kubenswrapper[4792]: E0216 22:00:07.592726 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:07 crc kubenswrapper[4792]: 
I0216 22:00:07.857133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" event={"ID":"dea9f2da-4123-4e53-a53f-f760412371e5","Type":"ContainerStarted","Data":"23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950"} Feb 16 22:00:08 crc kubenswrapper[4792]: I0216 22:00:08.886287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" event={"ID":"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a","Type":"ContainerStarted","Data":"1b355a2a9768678a526868ec53d7fe2551627963c253a1a2e6f4b39661c3cf66"} Feb 16 22:00:08 crc kubenswrapper[4792]: I0216 22:00:08.893644 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" event={"ID":"dea9f2da-4123-4e53-a53f-f760412371e5","Type":"ContainerStarted","Data":"78bd8eccfde02c14fc4ff2962cf71485078c080333df8a80d2d4dbde974c22cc"} Feb 16 22:00:08 crc kubenswrapper[4792]: I0216 22:00:08.920025 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" podStartSLOduration=2.238670529 podStartE2EDuration="14.920008096s" podCreationTimestamp="2026-02-16 21:59:54 +0000 UTC" firstStartedPulling="2026-02-16 21:59:55.423176461 +0000 UTC m=+1328.076455352" lastFinishedPulling="2026-02-16 22:00:08.104514028 +0000 UTC m=+1340.757792919" observedRunningTime="2026-02-16 22:00:08.905282987 +0000 UTC m=+1341.558561888" watchObservedRunningTime="2026-02-16 22:00:08.920008096 +0000 UTC m=+1341.573286987" Feb 16 22:00:08 crc kubenswrapper[4792]: I0216 22:00:08.929933 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" podStartSLOduration=8.929912578 podStartE2EDuration="8.929912578s" podCreationTimestamp="2026-02-16 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:08.923794754 +0000 UTC m=+1341.577073645" watchObservedRunningTime="2026-02-16 22:00:08.929912578 +0000 UTC m=+1341.583191469" Feb 16 22:00:09 crc kubenswrapper[4792]: I0216 22:00:09.912357 4792 generic.go:334] "Generic (PLEG): container finished" podID="dea9f2da-4123-4e53-a53f-f760412371e5" containerID="78bd8eccfde02c14fc4ff2962cf71485078c080333df8a80d2d4dbde974c22cc" exitCode=0 Feb 16 22:00:09 crc kubenswrapper[4792]: I0216 22:00:09.914920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" event={"ID":"dea9f2da-4123-4e53-a53f-f760412371e5","Type":"ContainerDied","Data":"78bd8eccfde02c14fc4ff2962cf71485078c080333df8a80d2d4dbde974c22cc"} Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.341455 4792 util.go:48] "No ready sandbox for pod can be found. 
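[editor's note] The podStartSLOduration figures above can be checked by hand: the SLO duration is the end-to-end startup time minus the image-pull window, and the pull window is the difference of the two monotonic offsets (m=+...). A worked check in Go (not part of the log) with constants copied from the nova-cell0-conductor-db-sync-bjbmf entry:

// slo_check.go - reproduces podStartSLOduration from the monotonic
// clock offsets (m=+...) printed in the entry above:
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 1328.076455352 // m=+ offset, seconds
		lastFinishedPulling = 1340.757792919 // m=+ offset, seconds
		podStartE2E         = 14.920008096   // podStartE2EDuration
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pull)               // ≈ 12.681337567s
	fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pull) // ≈ 2.238670529s, matching the log
}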
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.426980 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume\") pod \"dea9f2da-4123-4e53-a53f-f760412371e5\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.427081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume\") pod \"dea9f2da-4123-4e53-a53f-f760412371e5\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.427137 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79jf5\" (UniqueName: \"kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5\") pod \"dea9f2da-4123-4e53-a53f-f760412371e5\" (UID: \"dea9f2da-4123-4e53-a53f-f760412371e5\") " Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.427911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "dea9f2da-4123-4e53-a53f-f760412371e5" (UID: "dea9f2da-4123-4e53-a53f-f760412371e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.439911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5" (OuterVolumeSpecName: "kube-api-access-79jf5") pod "dea9f2da-4123-4e53-a53f-f760412371e5" (UID: "dea9f2da-4123-4e53-a53f-f760412371e5"). InnerVolumeSpecName "kube-api-access-79jf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.441718 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dea9f2da-4123-4e53-a53f-f760412371e5" (UID: "dea9f2da-4123-4e53-a53f-f760412371e5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.531998 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea9f2da-4123-4e53-a53f-f760412371e5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.532034 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea9f2da-4123-4e53-a53f-f760412371e5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.532044 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79jf5\" (UniqueName: \"kubernetes.io/projected/dea9f2da-4123-4e53-a53f-f760412371e5-kube-api-access-79jf5\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.935137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" event={"ID":"dea9f2da-4123-4e53-a53f-f760412371e5","Type":"ContainerDied","Data":"23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950"} Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.935357 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23488e5926e907acb8089eb5c6c994386ec93d977f4665d936870faa025bc950" Feb 16 22:00:11 crc kubenswrapper[4792]: I0216 22:00:11.935236 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.471134 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-qcr7g"] Feb 16 22:00:14 crc kubenswrapper[4792]: E0216 22:00:14.473536 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea9f2da-4123-4e53-a53f-f760412371e5" containerName="collect-profiles" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.473569 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea9f2da-4123-4e53-a53f-f760412371e5" containerName="collect-profiles" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.473832 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea9f2da-4123-4e53-a53f-f760412371e5" containerName="collect-profiles" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.474685 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.486760 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-3730-account-create-update-m7svz"] Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.488211 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.490084 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.513489 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qcr7g"] Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.535783 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3730-account-create-update-m7svz"] Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.606627 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs77n\" (UniqueName: \"kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.606820 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.606943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.607008 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbjl\" (UniqueName: \"kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.708898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.709038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.709085 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbjl\" (UniqueName: \"kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.709143 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hs77n\" (UniqueName: \"kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.709891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.709911 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.747297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbjl\" (UniqueName: \"kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl\") pod \"aodh-3730-account-create-update-m7svz\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.800498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs77n\" (UniqueName: \"kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n\") pod \"aodh-db-create-qcr7g\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.804327 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:14 crc kubenswrapper[4792]: I0216 22:00:14.817410 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.246405 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.247038 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-central-agent" containerID="cri-o://028b32f6520b913ea8298dfdd5f786c7392349fbdc826d480921286f5835206b" gracePeriod=30 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.247122 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="sg-core" containerID="cri-o://a109319d2232875b5ed9a094053b1442c957e794b564cc27e8cbf3ceffa33a43" gracePeriod=30 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.247152 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="proxy-httpd" containerID="cri-o://5f43ec0740d73701fef1b2e5b3837237c18f8f04f5839b176e65dfb14127c274" gracePeriod=30 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.247116 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-notification-agent" containerID="cri-o://86956197ad05d331cf1caf44e1d6b0ffc78e365f87f030ebbd2543526eb87fe5" gracePeriod=30 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.269400 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.233:3000/\": EOF" Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.467538 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-3730-account-create-update-m7svz"] Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.482019 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qcr7g"] Feb 16 22:00:15 crc kubenswrapper[4792]: W0216 22:00:15.493295 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ee8442a_1298_42d2_ab10_ac48aabf89ae.slice/crio-e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd WatchSource:0}: Error finding container e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd: Status 404 returned error can't find the container with id e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.985172 4792 generic.go:334] "Generic (PLEG): container finished" podID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerID="5f43ec0740d73701fef1b2e5b3837237c18f8f04f5839b176e65dfb14127c274" exitCode=0 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.985204 4792 generic.go:334] "Generic (PLEG): container finished" podID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerID="a109319d2232875b5ed9a094053b1442c957e794b564cc27e8cbf3ceffa33a43" exitCode=2 Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.985237 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerDied","Data":"5f43ec0740d73701fef1b2e5b3837237c18f8f04f5839b176e65dfb14127c274"} Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.985275 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerDied","Data":"a109319d2232875b5ed9a094053b1442c957e794b564cc27e8cbf3ceffa33a43"} Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.986865 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3730-account-create-update-m7svz" event={"ID":"4ee8442a-1298-42d2-ab10-ac48aabf89ae","Type":"ContainerStarted","Data":"e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd"} Feb 16 22:00:15 crc kubenswrapper[4792]: I0216 22:00:15.988214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcr7g" event={"ID":"fa786547-92a7-41b6-9da0-98b1492e513f","Type":"ContainerStarted","Data":"3ed78dddc739d1ef2a08542111227bcd564e0ff012e865605704cd614105a553"} Feb 16 22:00:16 crc kubenswrapper[4792]: E0216 22:00:16.292838 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.002074 4792 generic.go:334] "Generic (PLEG): container finished" podID="fa786547-92a7-41b6-9da0-98b1492e513f" containerID="5193278bd80b544b0b0bc7373ff1a71db8c2cd2c7b9d25ac39cc5aca3a17f631" exitCode=0 Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.002439 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcr7g" event={"ID":"fa786547-92a7-41b6-9da0-98b1492e513f","Type":"ContainerDied","Data":"5193278bd80b544b0b0bc7373ff1a71db8c2cd2c7b9d25ac39cc5aca3a17f631"} Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.010942 4792 generic.go:334] "Generic (PLEG): container finished" podID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerID="86956197ad05d331cf1caf44e1d6b0ffc78e365f87f030ebbd2543526eb87fe5" exitCode=0 Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.010985 4792 generic.go:334] "Generic (PLEG): container finished" podID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerID="028b32f6520b913ea8298dfdd5f786c7392349fbdc826d480921286f5835206b" exitCode=0 Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.011027 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerDied","Data":"86956197ad05d331cf1caf44e1d6b0ffc78e365f87f030ebbd2543526eb87fe5"} Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.011068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerDied","Data":"028b32f6520b913ea8298dfdd5f786c7392349fbdc826d480921286f5835206b"} Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.014886 4792 generic.go:334] "Generic (PLEG): container finished" podID="4ee8442a-1298-42d2-ab10-ac48aabf89ae" containerID="ad1e8f9f13b5ec4738719bfca21c43ab265cb3250c9dd389438b25fbf39ba7de" exitCode=0 Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.014931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-3730-account-create-update-m7svz" event={"ID":"4ee8442a-1298-42d2-ab10-ac48aabf89ae","Type":"ContainerDied","Data":"ad1e8f9f13b5ec4738719bfca21c43ab265cb3250c9dd389438b25fbf39ba7de"} Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.339037 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.474804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.474918 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.475180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.475821 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.475869 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.475984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.476017 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.476099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4nk5\" (UniqueName: \"kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5\") pod \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\" (UID: \"87b2b11d-56fb-403e-bd50-28eee88aa2f5\") " Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.476860 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc 
kubenswrapper[4792]: I0216 22:00:17.476097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.498243 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts" (OuterVolumeSpecName: "scripts") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.499912 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5" (OuterVolumeSpecName: "kube-api-access-q4nk5") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "kube-api-access-q4nk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.516724 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.578720 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4nk5\" (UniqueName: \"kubernetes.io/projected/87b2b11d-56fb-403e-bd50-28eee88aa2f5-kube-api-access-q4nk5\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.578766 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.578780 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b2b11d-56fb-403e-bd50-28eee88aa2f5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.578790 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.595797 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.633759 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data" (OuterVolumeSpecName: "config-data") pod "87b2b11d-56fb-403e-bd50-28eee88aa2f5" (UID: "87b2b11d-56fb-403e-bd50-28eee88aa2f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:17 crc kubenswrapper[4792]: E0216 22:00:17.650934 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.680569 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:17 crc kubenswrapper[4792]: I0216 22:00:17.681077 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b2b11d-56fb-403e-bd50-28eee88aa2f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.033746 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.038061 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b2b11d-56fb-403e-bd50-28eee88aa2f5","Type":"ContainerDied","Data":"48e10f5bbed1160c6cc2c4c5ac49e1eb3ffa28e44a087e021e9f7cf370d5b927"} Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.038105 4792 scope.go:117] "RemoveContainer" containerID="5f43ec0740d73701fef1b2e5b3837237c18f8f04f5839b176e65dfb14127c274" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.080029 4792 scope.go:117] "RemoveContainer" containerID="a109319d2232875b5ed9a094053b1442c957e794b564cc27e8cbf3ceffa33a43" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.108853 4792 scope.go:117] "RemoveContainer" containerID="86956197ad05d331cf1caf44e1d6b0ffc78e365f87f030ebbd2543526eb87fe5" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.152241 4792 scope.go:117] "RemoveContainer" containerID="028b32f6520b913ea8298dfdd5f786c7392349fbdc826d480921286f5835206b" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.625822 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.628847 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.701300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs77n\" (UniqueName: \"kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n\") pod \"fa786547-92a7-41b6-9da0-98b1492e513f\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.701402 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts\") pod \"fa786547-92a7-41b6-9da0-98b1492e513f\" (UID: \"fa786547-92a7-41b6-9da0-98b1492e513f\") " Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.701434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts\") pod \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.701649 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frbjl\" (UniqueName: \"kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl\") pod \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\" (UID: \"4ee8442a-1298-42d2-ab10-ac48aabf89ae\") " Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.702286 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ee8442a-1298-42d2-ab10-ac48aabf89ae" (UID: "4ee8442a-1298-42d2-ab10-ac48aabf89ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.703130 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa786547-92a7-41b6-9da0-98b1492e513f" (UID: "fa786547-92a7-41b6-9da0-98b1492e513f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.703231 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee8442a-1298-42d2-ab10-ac48aabf89ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.710829 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n" (OuterVolumeSpecName: "kube-api-access-hs77n") pod "fa786547-92a7-41b6-9da0-98b1492e513f" (UID: "fa786547-92a7-41b6-9da0-98b1492e513f"). InnerVolumeSpecName "kube-api-access-hs77n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.711053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl" (OuterVolumeSpecName: "kube-api-access-frbjl") pod "4ee8442a-1298-42d2-ab10-ac48aabf89ae" (UID: "4ee8442a-1298-42d2-ab10-ac48aabf89ae"). 
InnerVolumeSpecName "kube-api-access-frbjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.805794 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frbjl\" (UniqueName: \"kubernetes.io/projected/4ee8442a-1298-42d2-ab10-ac48aabf89ae-kube-api-access-frbjl\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.806428 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs77n\" (UniqueName: \"kubernetes.io/projected/fa786547-92a7-41b6-9da0-98b1492e513f-kube-api-access-hs77n\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:18 crc kubenswrapper[4792]: I0216 22:00:18.806536 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa786547-92a7-41b6-9da0-98b1492e513f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.041855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-3730-account-create-update-m7svz" event={"ID":"4ee8442a-1298-42d2-ab10-ac48aabf89ae","Type":"ContainerDied","Data":"e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd"} Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.042946 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7981a7fce1533873281c008e9b143cb86766c494f65f75e1b3d5f28ae7e33fd" Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.041888 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-3730-account-create-update-m7svz" Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.043811 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcr7g" Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.043813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcr7g" event={"ID":"fa786547-92a7-41b6-9da0-98b1492e513f","Type":"ContainerDied","Data":"3ed78dddc739d1ef2a08542111227bcd564e0ff012e865605704cd614105a553"} Feb 16 22:00:19 crc kubenswrapper[4792]: I0216 22:00:19.043877 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed78dddc739d1ef2a08542111227bcd564e0ff012e865605704cd614105a553" Feb 16 22:00:23 crc kubenswrapper[4792]: I0216 22:00:23.090172 4792 generic.go:334] "Generic (PLEG): container finished" podID="dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" containerID="1b355a2a9768678a526868ec53d7fe2551627963c253a1a2e6f4b39661c3cf66" exitCode=0 Feb 16 22:00:23 crc kubenswrapper[4792]: I0216 22:00:23.090812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" event={"ID":"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a","Type":"ContainerDied","Data":"1b355a2a9768678a526868ec53d7fe2551627963c253a1a2e6f4b39661c3cf66"} Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.577206 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.635891 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts\") pod \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.635973 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data\") pod \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.636008 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58bb\" (UniqueName: \"kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb\") pod \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.636290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle\") pod \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\" (UID: \"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a\") " Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.642863 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts" (OuterVolumeSpecName: "scripts") pod "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" (UID: "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.647971 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb" (OuterVolumeSpecName: "kube-api-access-b58bb") pod "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" (UID: "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a"). InnerVolumeSpecName "kube-api-access-b58bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.677476 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" (UID: "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.681498 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data" (OuterVolumeSpecName: "config-data") pod "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" (UID: "dba5cbbf-97a4-4785-9927-5e40e2b5fd7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.738450 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.738487 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.738496 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:24 crc kubenswrapper[4792]: I0216 22:00:24.738506 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b58bb\" (UniqueName: \"kubernetes.io/projected/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a-kube-api-access-b58bb\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038172 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-4zbd8"] Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038805 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-central-agent" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038830 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-central-agent" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038841 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" containerName="nova-cell0-conductor-db-sync" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038851 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" containerName="nova-cell0-conductor-db-sync" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038875 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee8442a-1298-42d2-ab10-ac48aabf89ae" containerName="mariadb-account-create-update" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038884 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee8442a-1298-42d2-ab10-ac48aabf89ae" containerName="mariadb-account-create-update" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038911 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa786547-92a7-41b6-9da0-98b1492e513f" containerName="mariadb-database-create" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038919 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa786547-92a7-41b6-9da0-98b1492e513f" containerName="mariadb-database-create" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038943 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="proxy-httpd" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038951 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="proxy-httpd" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038963 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-notification-agent" 
Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038970 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-notification-agent" Feb 16 22:00:25 crc kubenswrapper[4792]: E0216 22:00:25.038984 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="sg-core" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.038991 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="sg-core" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039240 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="sg-core" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039254 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="proxy-httpd" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039270 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-central-agent" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039290 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa786547-92a7-41b6-9da0-98b1492e513f" containerName="mariadb-database-create" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039320 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee8442a-1298-42d2-ab10-ac48aabf89ae" containerName="mariadb-account-create-update" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039332 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" containerName="ceilometer-notification-agent" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.039347 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" containerName="nova-cell0-conductor-db-sync" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.040297 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.048263 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.049485 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.057353 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9gfcj" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.058037 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.070674 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4zbd8"] Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.141137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" event={"ID":"dba5cbbf-97a4-4785-9927-5e40e2b5fd7a","Type":"ContainerDied","Data":"972967cb156879240ccf09aab943713934705e247713d0c200814dc912c91326"} Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.141179 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="972967cb156879240ccf09aab943713934705e247713d0c200814dc912c91326" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.141213 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bjbmf" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.145452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.145517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.145925 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.146703 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7tn\" (UniqueName: \"kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.249221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 
22:00:25.249373 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv7tn\" (UniqueName: \"kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.249441 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.249480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.254811 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.258878 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.259150 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.276354 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv7tn\" (UniqueName: \"kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn\") pod \"aodh-db-sync-4zbd8\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.276358 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.277959 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.280956 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.281235 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gzkq2" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.292053 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.351067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.351111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.351191 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7977\" (UniqueName: \"kubernetes.io/projected/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-kube-api-access-g7977\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.362345 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.456257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.456494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.456704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7977\" (UniqueName: \"kubernetes.io/projected/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-kube-api-access-g7977\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.460486 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.471925 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.477209 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7977\" (UniqueName: \"kubernetes.io/projected/2c87f02a-122e-4d95-8c0f-f4e8a17450a3-kube-api-access-g7977\") pod \"nova-cell0-conductor-0\" (UID: \"2c87f02a-122e-4d95-8c0f-f4e8a17450a3\") " pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.642357 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:25 crc kubenswrapper[4792]: I0216 22:00:25.942445 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4zbd8"] Feb 16 22:00:26 crc kubenswrapper[4792]: I0216 22:00:26.152337 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4zbd8" event={"ID":"daa36328-3bf1-4306-ba33-69217b14a2a5","Type":"ContainerStarted","Data":"ff075fde35c04c1fad74b0c9de2263061b50450e6b240547b4b0fceffa108093"} Feb 16 22:00:26 crc kubenswrapper[4792]: I0216 22:00:26.205451 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 22:00:27 crc kubenswrapper[4792]: I0216 22:00:27.164058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2c87f02a-122e-4d95-8c0f-f4e8a17450a3","Type":"ContainerStarted","Data":"c439e35fbd48101e27f31d9d958cd6ea9b01b613f953cb668791c08b6789f3a7"} Feb 16 22:00:27 crc kubenswrapper[4792]: I0216 22:00:27.164320 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:27 crc kubenswrapper[4792]: I0216 22:00:27.164330 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2c87f02a-122e-4d95-8c0f-f4e8a17450a3","Type":"ContainerStarted","Data":"6973ef6ab49e32e9deec276b82981aaa47fb29c499b4422ecc1030e37a85bbc5"} Feb 16 22:00:27 crc kubenswrapper[4792]: I0216 22:00:27.184127 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.184105575 podStartE2EDuration="2.184105575s" podCreationTimestamp="2026-02-16 22:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:27.179636968 +0000 UTC m=+1359.832915869" watchObservedRunningTime="2026-02-16 22:00:27.184105575 +0000 UTC m=+1359.837384466" Feb 16 22:00:27 crc kubenswrapper[4792]: E0216 22:00:27.960477 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:31 crc kubenswrapper[4792]: E0216 22:00:31.028314 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.211863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4zbd8" event={"ID":"daa36328-3bf1-4306-ba33-69217b14a2a5","Type":"ContainerStarted","Data":"07e12e812f39afd826e3f07afb57b98715b747012433c2861430d4e813e455c8"} Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.232682 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-4zbd8" podStartSLOduration=1.28656259 podStartE2EDuration="6.232667149s" podCreationTimestamp="2026-02-16 22:00:25 +0000 UTC" firstStartedPulling="2026-02-16 22:00:25.952556162 +0000 UTC m=+1358.605835053" 
lastFinishedPulling="2026-02-16 22:00:30.898660721 +0000 UTC m=+1363.551939612" observedRunningTime="2026-02-16 22:00:31.225973809 +0000 UTC m=+1363.879252700" watchObservedRunningTime="2026-02-16 22:00:31.232667149 +0000 UTC m=+1363.885946040" Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.532532 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.532616 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.532658 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.533465 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:00:31 crc kubenswrapper[4792]: I0216 22:00:31.533529 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4" gracePeriod=600 Feb 16 22:00:32 crc kubenswrapper[4792]: I0216 22:00:32.232853 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4" exitCode=0 Feb 16 22:00:32 crc kubenswrapper[4792]: I0216 22:00:32.233115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4"} Feb 16 22:00:32 crc kubenswrapper[4792]: I0216 22:00:32.233225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"} Feb 16 22:00:32 crc kubenswrapper[4792]: I0216 22:00:32.233258 4792 scope.go:117] "RemoveContainer" containerID="4a0f6c100b91a3d62bdc91a86204ff35001f317f565e857fd70943216f5773d9" Feb 16 22:00:34 crc kubenswrapper[4792]: I0216 22:00:34.264711 4792 generic.go:334] "Generic (PLEG): container finished" podID="daa36328-3bf1-4306-ba33-69217b14a2a5" containerID="07e12e812f39afd826e3f07afb57b98715b747012433c2861430d4e813e455c8" exitCode=0 Feb 16 22:00:34 crc kubenswrapper[4792]: I0216 22:00:34.264800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4zbd8" 
event={"ID":"daa36328-3bf1-4306-ba33-69217b14a2a5","Type":"ContainerDied","Data":"07e12e812f39afd826e3f07afb57b98715b747012433c2861430d4e813e455c8"} Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.691766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.724511 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.824352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv7tn\" (UniqueName: \"kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn\") pod \"daa36328-3bf1-4306-ba33-69217b14a2a5\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.824816 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data\") pod \"daa36328-3bf1-4306-ba33-69217b14a2a5\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.824864 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle\") pod \"daa36328-3bf1-4306-ba33-69217b14a2a5\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.824920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts\") pod \"daa36328-3bf1-4306-ba33-69217b14a2a5\" (UID: \"daa36328-3bf1-4306-ba33-69217b14a2a5\") " Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.830225 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts" (OuterVolumeSpecName: "scripts") pod "daa36328-3bf1-4306-ba33-69217b14a2a5" (UID: "daa36328-3bf1-4306-ba33-69217b14a2a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.841119 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn" (OuterVolumeSpecName: "kube-api-access-jv7tn") pod "daa36328-3bf1-4306-ba33-69217b14a2a5" (UID: "daa36328-3bf1-4306-ba33-69217b14a2a5"). InnerVolumeSpecName "kube-api-access-jv7tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.862497 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data" (OuterVolumeSpecName: "config-data") pod "daa36328-3bf1-4306-ba33-69217b14a2a5" (UID: "daa36328-3bf1-4306-ba33-69217b14a2a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.863228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daa36328-3bf1-4306-ba33-69217b14a2a5" (UID: "daa36328-3bf1-4306-ba33-69217b14a2a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.927335 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv7tn\" (UniqueName: \"kubernetes.io/projected/daa36328-3bf1-4306-ba33-69217b14a2a5-kube-api-access-jv7tn\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.927378 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.927391 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:35 crc kubenswrapper[4792]: I0216 22:00:35.927401 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa36328-3bf1-4306-ba33-69217b14a2a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.214169 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pgrtz"] Feb 16 22:00:36 crc kubenswrapper[4792]: E0216 22:00:36.214699 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa36328-3bf1-4306-ba33-69217b14a2a5" containerName="aodh-db-sync" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.214715 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa36328-3bf1-4306-ba33-69217b14a2a5" containerName="aodh-db-sync" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.214962 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa36328-3bf1-4306-ba33-69217b14a2a5" containerName="aodh-db-sync" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.215702 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.218142 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.218230 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.225765 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgrtz"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.291965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4zbd8" event={"ID":"daa36328-3bf1-4306-ba33-69217b14a2a5","Type":"ContainerDied","Data":"ff075fde35c04c1fad74b0c9de2263061b50450e6b240547b4b0fceffa108093"} Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.292006 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff075fde35c04c1fad74b0c9de2263061b50450e6b240547b4b0fceffa108093" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.292042 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4zbd8" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.337418 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8zt\" (UniqueName: \"kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.337474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.338052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.338255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.411834 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.413852 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.420810 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.427054 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.439968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.440054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.440127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8zt\" (UniqueName: \"kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.440156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.446010 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.447291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.451188 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.496506 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8zt\" (UniqueName: \"kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt\") pod \"nova-cell0-cell-mapping-pgrtz\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.538092 4792 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.542178 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.542271 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.542344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.542364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vpv\" (UniqueName: \"kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.542526 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.555174 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.559373 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.574197 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.645999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646090 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vpv\" (UniqueName: \"kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646161 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtc2\" (UniqueName: \"kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.646362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.648159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.657269 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.670253 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.694952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vpv\" (UniqueName: \"kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv\") pod \"nova-api-0\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.737456 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.751143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.751227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.751282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvtc2\" (UniqueName: \"kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.752153 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.757301 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.765272 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.786042 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9gfcj" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.787136 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.787251 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.787377 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.788879 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.799895 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.800895 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.802926 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.811719 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtc2\" (UniqueName: \"kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2\") pod \"nova-scheduler-0\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.834660 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.834972 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.860989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861264 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861392 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftx4\" (UniqueName: \"kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861431 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861445 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kbq\" (UniqueName: \"kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjpjt\" (UniqueName: \"kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.861661 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.893685 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.916007 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7kbq\" (UniqueName: \"kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964148 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjpjt\" (UniqueName: \"kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964260 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964400 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964439 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964475 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftx4\" (UniqueName: \"kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964499 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.964928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.975791 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.976731 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.980320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.981131 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.982102 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.983404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:36 crc kubenswrapper[4792]: I0216 22:00:36.989285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.003658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.017377 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 22:00:37 crc 
kubenswrapper[4792]: I0216 22:00:37.030417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjpjt\" (UniqueName: \"kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt\") pod \"aodh-0\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " pod="openstack/aodh-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.057865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftx4\" (UniqueName: \"kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4\") pod \"nova-metadata-0\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " pod="openstack/nova-metadata-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.070888 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7kbq\" (UniqueName: \"kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.206117 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.234567 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"] Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.236887 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.238352 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307374 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9vq\" (UniqueName: \"kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307490 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307518 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.307619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.327288 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.359274 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"] Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450498 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450709 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9vq\" (UniqueName: \"kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.450800 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.451703 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.452204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.452714 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.453191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.457778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.492479 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgrtz"] Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.509766 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9vq\" (UniqueName: \"kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq\") pod \"dnsmasq-dns-7877d89589-dfq4t\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") " pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.573138 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:37 crc kubenswrapper[4792]: I0216 22:00:37.791780 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:38 crc kubenswrapper[4792]: W0216 22:00:38.053630 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67d0476b_27f8_4543_914d_fddf5b2960b7.slice/crio-db2bddb845b9d26e14488b8d3b5e03268cca471d98d8335bb26068f2d1c8bc2e WatchSource:0}: Error finding container db2bddb845b9d26e14488b8d3b5e03268cca471d98d8335bb26068f2d1c8bc2e: Status 404 returned error can't find the container with id db2bddb845b9d26e14488b8d3b5e03268cca471d98d8335bb26068f2d1c8bc2e Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.064094 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:38 crc kubenswrapper[4792]: W0216 22:00:38.134046 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a67b810_5101_414f_a0ed_a90a5ffc30af.slice/crio-0edd57667e534f5b0d4dffce58f016832bb73696323a28b2c645bdddbdec7e4b WatchSource:0}: Error finding container 0edd57667e534f5b0d4dffce58f016832bb73696323a28b2c645bdddbdec7e4b: Status 404 returned error can't find the container with id 0edd57667e534f5b0d4dffce58f016832bb73696323a28b2c645bdddbdec7e4b Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.196366 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.211476 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.372042 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.403114 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wjlz9"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.404996 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.408175 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.408319 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.433532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"46b0b16f-b82e-4daf-841e-6d8aa64e35e0","Type":"ContainerStarted","Data":"e0827ed980a46616ca199b9bacb72ca886c19d22b342ce483c37299954a511c7"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.434828 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wjlz9"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.450352 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgrtz" event={"ID":"2a1a09bd-f9f3-4fd9-89a8-c11010239591","Type":"ContainerStarted","Data":"d6a451fdd95bfe0e604149b6a8587432a2f6bf66af2f3c037b56076fb3a3343e"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.450388 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgrtz" event={"ID":"2a1a09bd-f9f3-4fd9-89a8-c11010239591","Type":"ContainerStarted","Data":"9abd4f2e5246bd8402326c87b7a1e1ca48e22659c810f6128899b140f75261b4"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.456473 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerStarted","Data":"c9dcc7ad5c0cc4ac92945d585fed2ef7a4be09edc9f81ccf6d10b383a02fc909"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.475799 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67d0476b-27f8-4543-914d-fddf5b2960b7","Type":"ContainerStarted","Data":"db2bddb845b9d26e14488b8d3b5e03268cca471d98d8335bb26068f2d1c8bc2e"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.483584 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pgrtz" podStartSLOduration=2.4835592220000002 podStartE2EDuration="2.483559222s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:38.47471401 +0000 UTC m=+1371.127992901" watchObservedRunningTime="2026-02-16 22:00:38.483559222 +0000 UTC m=+1371.136838113" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.488700 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.488775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " 
pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.489059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tkn\" (UniqueName: \"kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.489199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.489726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerStarted","Data":"0edd57667e534f5b0d4dffce58f016832bb73696323a28b2c645bdddbdec7e4b"} Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.492452 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerStarted","Data":"b46e3ac4e26d164b29b1ef5bd93dc0007a620f3602b51e894dac6558c8b45ae9"} Feb 16 22:00:38 crc kubenswrapper[4792]: E0216 22:00:38.544984 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.591285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.591798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.591840 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.592019 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tkn\" (UniqueName: \"kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.598930 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.605652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.611191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.643236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tkn\" (UniqueName: \"kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn\") pod \"nova-cell1-conductor-db-sync-wjlz9\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.681660 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"] Feb 16 22:00:38 crc kubenswrapper[4792]: I0216 22:00:38.746447 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:39 crc kubenswrapper[4792]: W0216 22:00:39.373476 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aafdfa_5637_4a23_acd9_48d520e0d082.slice/crio-b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd WatchSource:0}: Error finding container b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd: Status 404 returned error can't find the container with id b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.387581 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wjlz9"] Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.513160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerStarted","Data":"6e41209f855a831fb7b9607b66ba69ac6846bc3abbbdd1299adf3a5f172ebf84"} Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.518352 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" event={"ID":"a7aafdfa-5637-4a23-acd9-48d520e0d082","Type":"ContainerStarted","Data":"b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd"} Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.521441 4792 generic.go:334] "Generic (PLEG): container finished" podID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerID="af64f879f76a980a9f21779c8ffdd63dcdcd715bd132c592285fea39843a1a0e" exitCode=0 Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.521585 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7877d89589-dfq4t" event={"ID":"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1","Type":"ContainerDied","Data":"af64f879f76a980a9f21779c8ffdd63dcdcd715bd132c592285fea39843a1a0e"} Feb 16 22:00:39 crc kubenswrapper[4792]: I0216 22:00:39.521639 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" event={"ID":"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1","Type":"ContainerStarted","Data":"65f54c465784dfbfc959f95d255d8ba251f3741801b15c5e9ab5df7f6f3f45d8"} Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.493364 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.508212 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.549926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" event={"ID":"a7aafdfa-5637-4a23-acd9-48d520e0d082","Type":"ContainerStarted","Data":"74aa7947b36e9def994c0f46e36b3dbbbf8d5597b5094129f133b66d1702aa79"} Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.561790 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" event={"ID":"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1","Type":"ContainerStarted","Data":"39734c6f8f55659d7c9cb021060fa9f4fe423fa05fcaf185cd1b4ebf0ecfb6af"} Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.562719 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.582973 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" podStartSLOduration=2.582952982 podStartE2EDuration="2.582952982s" podCreationTimestamp="2026-02-16 22:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:40.568952446 +0000 UTC m=+1373.222231337" watchObservedRunningTime="2026-02-16 22:00:40.582952982 +0000 UTC m=+1373.236231873" Feb 16 22:00:40 crc kubenswrapper[4792]: I0216 22:00:40.609358 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" podStartSLOduration=3.609332906 podStartE2EDuration="3.609332906s" podCreationTimestamp="2026-02-16 22:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:40.588361013 +0000 UTC m=+1373.241639914" watchObservedRunningTime="2026-02-16 22:00:40.609332906 +0000 UTC m=+1373.262611797" Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.629744 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67d0476b-27f8-4543-914d-fddf5b2960b7","Type":"ContainerStarted","Data":"54711b9aae472a7c9579ab3e7649d97a77586b561ddc0ca541ce64cda4c890a9"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.632564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerStarted","Data":"037ce3866246dc2e74a66254f31bf22adb9eac08da90d692e4af04a0e5ff8c03"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.634858 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerStarted","Data":"6f5fafff1a752203d09edc97d579187d40fdcd0b9d6d0c87c811ad976f4cc1a2"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.634905 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerStarted","Data":"ae2f7f6f5fc3d95a7a343de73fdcf66c0e1b71d1355510b301735a2ef55fe36a"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.636655 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"46b0b16f-b82e-4daf-841e-6d8aa64e35e0","Type":"ContainerStarted","Data":"2f0db2c562133693caa8111d5ac50ef75ec1c3c2b171fd2a807ee3a40d9ea9b0"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.636780 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2f0db2c562133693caa8111d5ac50ef75ec1c3c2b171fd2a807ee3a40d9ea9b0" gracePeriod=30 Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.639798 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerStarted","Data":"e1a001bdd889500b588eb1b797e9317fff6a976c6834cb04dd76892a3f9e0f80"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.639834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerStarted","Data":"210f8a4c954205057af3fdeb1521ea040821c44ba58ac9d42deb19082122ec2b"} Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.639999 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-metadata" containerID="cri-o://e1a001bdd889500b588eb1b797e9317fff6a976c6834cb04dd76892a3f9e0f80" gracePeriod=30 Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.639961 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-log" containerID="cri-o://210f8a4c954205057af3fdeb1521ea040821c44ba58ac9d42deb19082122ec2b" gracePeriod=30 Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.667850 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.13617361 podStartE2EDuration="8.667827144s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="2026-02-16 22:00:38.098709894 +0000 UTC m=+1370.751988775" lastFinishedPulling="2026-02-16 22:00:43.630363418 +0000 UTC m=+1376.283642309" observedRunningTime="2026-02-16 22:00:44.66231294 +0000 UTC m=+1377.315591831" watchObservedRunningTime="2026-02-16 22:00:44.667827144 +0000 UTC m=+1377.321106045" Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.726585 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.468690142 podStartE2EDuration="8.72656005s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="2026-02-16 22:00:38.386827587 +0000 UTC m=+1371.040106478" lastFinishedPulling="2026-02-16 22:00:43.644697505 +0000 UTC m=+1376.297976386" observedRunningTime="2026-02-16 
22:00:44.697890978 +0000 UTC m=+1377.351169889" watchObservedRunningTime="2026-02-16 22:00:44.72656005 +0000 UTC m=+1377.379838941" Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.756627 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.2921607760000002 podStartE2EDuration="8.756582363s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="2026-02-16 22:00:38.176101484 +0000 UTC m=+1370.829380375" lastFinishedPulling="2026-02-16 22:00:43.640523071 +0000 UTC m=+1376.293801962" observedRunningTime="2026-02-16 22:00:44.711938226 +0000 UTC m=+1377.365217117" watchObservedRunningTime="2026-02-16 22:00:44.756582363 +0000 UTC m=+1377.409861264" Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.765365 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9853774680000003 podStartE2EDuration="8.765338873s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="2026-02-16 22:00:37.848842377 +0000 UTC m=+1370.502121278" lastFinishedPulling="2026-02-16 22:00:43.628803792 +0000 UTC m=+1376.282082683" observedRunningTime="2026-02-16 22:00:44.740141614 +0000 UTC m=+1377.393420515" watchObservedRunningTime="2026-02-16 22:00:44.765338873 +0000 UTC m=+1377.418617764" Feb 16 22:00:44 crc kubenswrapper[4792]: I0216 22:00:44.802225 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 22:00:45 crc kubenswrapper[4792]: I0216 22:00:45.663321 4792 generic.go:334] "Generic (PLEG): container finished" podID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerID="210f8a4c954205057af3fdeb1521ea040821c44ba58ac9d42deb19082122ec2b" exitCode=143 Feb 16 22:00:45 crc kubenswrapper[4792]: I0216 22:00:45.663409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerDied","Data":"210f8a4c954205057af3fdeb1521ea040821c44ba58ac9d42deb19082122ec2b"} Feb 16 22:00:46 crc kubenswrapper[4792]: E0216 22:00:46.275550 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b826e6_839e_4981_9c0e_1ae295f48f5b.slice/crio-860c692dccf804689614946c8a9c09cf69d958c6e2b57149ef3632cd65ad932c\": RecentStats: unable to find data in memory cache]" Feb 16 22:00:46 crc kubenswrapper[4792]: I0216 22:00:46.693671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerStarted","Data":"18e8014d4ddb956d964e2e83b2474327700daddc31a33089f7ce59f544fdb5f9"} Feb 16 22:00:46 crc kubenswrapper[4792]: I0216 22:00:46.738943 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:00:46 crc kubenswrapper[4792]: I0216 22:00:46.738982 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:00:46 crc kubenswrapper[4792]: I0216 22:00:46.936827 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 22:00:46 crc kubenswrapper[4792]: I0216 22:00:46.938029 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.008393 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.239457 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.328455 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.328502 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.575788 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.646633 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.709873 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="dnsmasq-dns" containerID="cri-o://090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265" gracePeriod=10 Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.771191 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.822813 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 22:00:47 crc kubenswrapper[4792]: I0216 22:00:47.822813 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.051542 4792 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod87b2b11d-56fb-403e-bd50-28eee88aa2f5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod87b2b11d-56fb-403e-bd50-28eee88aa2f5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod87b2b11d_56fb_403e_bd50_28eee88aa2f5.slice" Feb 16 22:00:48 crc kubenswrapper[4792]: E0216 22:00:48.051592 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod87b2b11d-56fb-403e-bd50-28eee88aa2f5] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod87b2b11d-56fb-403e-bd50-28eee88aa2f5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod87b2b11d_56fb_403e_bd50_28eee88aa2f5.slice" pod="openstack/ceilometer-0" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.511103 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.613790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.614065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.614097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.614310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.614363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.614425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgmz9\" (UniqueName: \"kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9\") pod \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\" (UID: \"ba3359c5-ae19-444d-ba61-8ec59d678b3e\") " Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.642828 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9" (OuterVolumeSpecName: "kube-api-access-lgmz9") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "kube-api-access-lgmz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.718023 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgmz9\" (UniqueName: \"kubernetes.io/projected/ba3359c5-ae19-444d-ba61-8ec59d678b3e-kube-api-access-lgmz9\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.787949 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.805818 4792 generic.go:334] "Generic (PLEG): container finished" podID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerID="090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265" exitCode=0 Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.805921 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.806422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" event={"ID":"ba3359c5-ae19-444d-ba61-8ec59d678b3e","Type":"ContainerDied","Data":"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265"} Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.806475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" event={"ID":"ba3359c5-ae19-444d-ba61-8ec59d678b3e","Type":"ContainerDied","Data":"772838a0d44b2d90c0626f1d09119dcf849c9a6866f4a347239af6561454b4f2"} Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.806497 4792 scope.go:117] "RemoveContainer" containerID="090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.806711 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-g6wl7" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.807563 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.820293 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.820335 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.838996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config" (OuterVolumeSpecName: "config") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.854082 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.879216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba3359c5-ae19-444d-ba61-8ec59d678b3e" (UID: "ba3359c5-ae19-444d-ba61-8ec59d678b3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.909654 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.922891 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.922957 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.922967 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3359c5-ae19-444d-ba61-8ec59d678b3e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.927015 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.946937 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:48 crc kubenswrapper[4792]: E0216 22:00:48.947764 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="init" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.947861 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="init" Feb 16 22:00:48 crc kubenswrapper[4792]: E0216 22:00:48.947942 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="dnsmasq-dns" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.948002 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="dnsmasq-dns" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.948293 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" containerName="dnsmasq-dns" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.950478 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.954581 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.954838 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 22:00:48 crc kubenswrapper[4792]: I0216 22:00:48.961416 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.127737 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzwq\" (UniqueName: \"kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.127851 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.127880 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.127947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.127994 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.128028 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.128058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.148018 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.157126 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-g6wl7"] Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.230228 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.230391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231158 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzwq\" (UniqueName: \"kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231527 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.231825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.233134 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.236581 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.240753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.247095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.259801 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzwq\" (UniqueName: \"kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.279571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.280244 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.281178 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.452221 4792 scope.go:117] "RemoveContainer" containerID="34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.546554 4792 scope.go:117] "RemoveContainer" containerID="090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265" Feb 16 22:00:49 crc kubenswrapper[4792]: E0216 22:00:49.546946 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265\": container with ID starting with 090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265 not found: ID does not exist" containerID="090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.546985 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265"} err="failed to get container status \"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265\": rpc error: code = NotFound desc = could not find container \"090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265\": container with ID starting with 090c4c9972db42c3a41f212a79adb03fab24308fe9b192f3098a2b3415529265 not found: ID does not exist" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.547013 4792 scope.go:117] "RemoveContainer" containerID="34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587" Feb 16 22:00:49 crc kubenswrapper[4792]: E0216 22:00:49.547799 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587\": container with ID starting with 34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587 not found: ID 
does not exist" containerID="34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.547825 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587"} err="failed to get container status \"34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587\": rpc error: code = NotFound desc = could not find container \"34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587\": container with ID starting with 34ef8ccc6ebc6bb273b86fe18fa937fcd359dc40e51c2e6b0b665600f473a587 not found: ID does not exist" Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.826286 4792 generic.go:334] "Generic (PLEG): container finished" podID="2a1a09bd-f9f3-4fd9-89a8-c11010239591" containerID="d6a451fdd95bfe0e604149b6a8587432a2f6bf66af2f3c037b56076fb3a3343e" exitCode=0 Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.826341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgrtz" event={"ID":"2a1a09bd-f9f3-4fd9-89a8-c11010239591","Type":"ContainerDied","Data":"d6a451fdd95bfe0e604149b6a8587432a2f6bf66af2f3c037b56076fb3a3343e"} Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.843557 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-api" containerID="cri-o://6e41209f855a831fb7b9607b66ba69ac6846bc3abbbdd1299adf3a5f172ebf84" gracePeriod=30 Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.843666 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-evaluator" containerID="cri-o://037ce3866246dc2e74a66254f31bf22adb9eac08da90d692e4af04a0e5ff8c03" gracePeriod=30 Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.843663 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-notifier" containerID="cri-o://18e8014d4ddb956d964e2e83b2474327700daddc31a33089f7ce59f544fdb5f9" gracePeriod=30 Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.843659 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-listener" containerID="cri-o://a41e247e9cf56106fae7b42f887a08da85365e34eea5f1589b46ce1a0c57eb6d" gracePeriod=30 Feb 16 22:00:49 crc kubenswrapper[4792]: I0216 22:00:49.891307 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.4859029919999998 podStartE2EDuration="13.891283518s" podCreationTimestamp="2026-02-16 22:00:36 +0000 UTC" firstStartedPulling="2026-02-16 22:00:38.143675671 +0000 UTC m=+1370.796954562" lastFinishedPulling="2026-02-16 22:00:49.549056197 +0000 UTC m=+1382.202335088" observedRunningTime="2026-02-16 22:00:49.874017305 +0000 UTC m=+1382.527296196" watchObservedRunningTime="2026-02-16 22:00:49.891283518 +0000 UTC m=+1382.544562409" Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.048818 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b2b11d-56fb-403e-bd50-28eee88aa2f5" path="/var/lib/kubelet/pods/87b2b11d-56fb-403e-bd50-28eee88aa2f5/volumes" Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.053942 4792 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3359c5-ae19-444d-ba61-8ec59d678b3e" path="/var/lib/kubelet/pods/ba3359c5-ae19-444d-ba61-8ec59d678b3e/volumes" Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.057233 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.853686 4792 generic.go:334] "Generic (PLEG): container finished" podID="a7aafdfa-5637-4a23-acd9-48d520e0d082" containerID="74aa7947b36e9def994c0f46e36b3dbbbf8d5597b5094129f133b66d1702aa79" exitCode=0 Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.853801 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" event={"ID":"a7aafdfa-5637-4a23-acd9-48d520e0d082","Type":"ContainerDied","Data":"74aa7947b36e9def994c0f46e36b3dbbbf8d5597b5094129f133b66d1702aa79"} Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.856821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerStarted","Data":"0841c2f2da3cb21170af22daed7a33f914e8e42380b26518a4a9960680d91dd8"} Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.856872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerStarted","Data":"ec2635ffd1a46c5c59e88fea2556ecd5961810efee1d5103136e3d1871d480bb"} Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.860359 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerID="037ce3866246dc2e74a66254f31bf22adb9eac08da90d692e4af04a0e5ff8c03" exitCode=0 Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.860389 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerID="6e41209f855a831fb7b9607b66ba69ac6846bc3abbbdd1299adf3a5f172ebf84" exitCode=0 Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.860523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerStarted","Data":"a41e247e9cf56106fae7b42f887a08da85365e34eea5f1589b46ce1a0c57eb6d"} Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.861691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerDied","Data":"037ce3866246dc2e74a66254f31bf22adb9eac08da90d692e4af04a0e5ff8c03"} Feb 16 22:00:50 crc kubenswrapper[4792]: I0216 22:00:50.861719 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerDied","Data":"6e41209f855a831fb7b9607b66ba69ac6846bc3abbbdd1299adf3a5f172ebf84"} Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.392224 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.429332 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8zt\" (UniqueName: \"kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt\") pod \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.429753 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data\") pod \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.429777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") pod \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.429805 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts\") pod \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.437197 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts" (OuterVolumeSpecName: "scripts") pod "2a1a09bd-f9f3-4fd9-89a8-c11010239591" (UID: "2a1a09bd-f9f3-4fd9-89a8-c11010239591"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.445580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt" (OuterVolumeSpecName: "kube-api-access-ls8zt") pod "2a1a09bd-f9f3-4fd9-89a8-c11010239591" (UID: "2a1a09bd-f9f3-4fd9-89a8-c11010239591"). InnerVolumeSpecName "kube-api-access-ls8zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:51 crc kubenswrapper[4792]: E0216 22:00:51.463996 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle podName:2a1a09bd-f9f3-4fd9-89a8-c11010239591 nodeName:}" failed. No retries permitted until 2026-02-16 22:00:51.963965722 +0000 UTC m=+1384.617244613 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle") pod "2a1a09bd-f9f3-4fd9-89a8-c11010239591" (UID: "2a1a09bd-f9f3-4fd9-89a8-c11010239591") : error deleting /var/lib/kubelet/pods/2a1a09bd-f9f3-4fd9-89a8-c11010239591/volume-subpaths: remove /var/lib/kubelet/pods/2a1a09bd-f9f3-4fd9-89a8-c11010239591/volume-subpaths: no such file or directory Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.467256 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data" (OuterVolumeSpecName: "config-data") pod "2a1a09bd-f9f3-4fd9-89a8-c11010239591" (UID: "2a1a09bd-f9f3-4fd9-89a8-c11010239591"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.532857 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8zt\" (UniqueName: \"kubernetes.io/projected/2a1a09bd-f9f3-4fd9-89a8-c11010239591-kube-api-access-ls8zt\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.532895 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.532907 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.871446 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerStarted","Data":"5b4e3d26546f4579929b2d5fdc7cf4dcefa6a5adf946afeb5f1c6959e9495926"} Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.872975 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgrtz" Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.872965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgrtz" event={"ID":"2a1a09bd-f9f3-4fd9-89a8-c11010239591","Type":"ContainerDied","Data":"9abd4f2e5246bd8402326c87b7a1e1ca48e22659c810f6128899b140f75261b4"} Feb 16 22:00:51 crc kubenswrapper[4792]: I0216 22:00:51.873044 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abd4f2e5246bd8402326c87b7a1e1ca48e22659c810f6128899b140f75261b4" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.044141 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") pod \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\" (UID: \"2a1a09bd-f9f3-4fd9-89a8-c11010239591\") " Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.086805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a1a09bd-f9f3-4fd9-89a8-c11010239591" (UID: "2a1a09bd-f9f3-4fd9-89a8-c11010239591"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.110229 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.110546 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-log" containerID="cri-o://ae2f7f6f5fc3d95a7a343de73fdcf66c0e1b71d1355510b301735a2ef55fe36a" gracePeriod=30 Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.112074 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-api" containerID="cri-o://6f5fafff1a752203d09edc97d579187d40fdcd0b9d6d0c87c811ad976f4cc1a2" gracePeriod=30 Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.149362 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1a09bd-f9f3-4fd9-89a8-c11010239591-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.187082 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.187331 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="67d0476b-27f8-4543-914d-fddf5b2960b7" containerName="nova-scheduler-scheduler" containerID="cri-o://54711b9aae472a7c9579ab3e7649d97a77586b561ddc0ca541ce64cda4c890a9" gracePeriod=30 Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.441005 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.556625 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data\") pod \"a7aafdfa-5637-4a23-acd9-48d520e0d082\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.556682 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle\") pod \"a7aafdfa-5637-4a23-acd9-48d520e0d082\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.556784 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tkn\" (UniqueName: \"kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn\") pod \"a7aafdfa-5637-4a23-acd9-48d520e0d082\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.556991 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts\") pod \"a7aafdfa-5637-4a23-acd9-48d520e0d082\" (UID: \"a7aafdfa-5637-4a23-acd9-48d520e0d082\") " Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.562963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn" (OuterVolumeSpecName: "kube-api-access-l5tkn") pod "a7aafdfa-5637-4a23-acd9-48d520e0d082" (UID: "a7aafdfa-5637-4a23-acd9-48d520e0d082"). InnerVolumeSpecName "kube-api-access-l5tkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.564876 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts" (OuterVolumeSpecName: "scripts") pod "a7aafdfa-5637-4a23-acd9-48d520e0d082" (UID: "a7aafdfa-5637-4a23-acd9-48d520e0d082"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.596719 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data" (OuterVolumeSpecName: "config-data") pod "a7aafdfa-5637-4a23-acd9-48d520e0d082" (UID: "a7aafdfa-5637-4a23-acd9-48d520e0d082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.601087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7aafdfa-5637-4a23-acd9-48d520e0d082" (UID: "a7aafdfa-5637-4a23-acd9-48d520e0d082"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.659873 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.659912 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.659928 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tkn\" (UniqueName: \"kubernetes.io/projected/a7aafdfa-5637-4a23-acd9-48d520e0d082-kube-api-access-l5tkn\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.659940 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aafdfa-5637-4a23-acd9-48d520e0d082-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.890862 4792 generic.go:334] "Generic (PLEG): container finished" podID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerID="ae2f7f6f5fc3d95a7a343de73fdcf66c0e1b71d1355510b301735a2ef55fe36a" exitCode=143 Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.891069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerDied","Data":"ae2f7f6f5fc3d95a7a343de73fdcf66c0e1b71d1355510b301735a2ef55fe36a"} Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.896200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" event={"ID":"a7aafdfa-5637-4a23-acd9-48d520e0d082","Type":"ContainerDied","Data":"b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd"} Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.896236 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b543803ba54bb00b80644449b71c86c826427d6788e2b112b4bd02c7c2a548fd" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.896297 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wjlz9" Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.903859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerStarted","Data":"fc347e4348bb0d11e8b36f98b21bb931bcc37f4b92d7dc6f987dfda108c16ca8"} Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.907433 4792 generic.go:334] "Generic (PLEG): container finished" podID="67d0476b-27f8-4543-914d-fddf5b2960b7" containerID="54711b9aae472a7c9579ab3e7649d97a77586b561ddc0ca541ce64cda4c890a9" exitCode=0 Feb 16 22:00:52 crc kubenswrapper[4792]: I0216 22:00:52.907476 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67d0476b-27f8-4543-914d-fddf5b2960b7","Type":"ContainerDied","Data":"54711b9aae472a7c9579ab3e7649d97a77586b561ddc0ca541ce64cda4c890a9"} Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.009943 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 22:00:53 crc kubenswrapper[4792]: E0216 22:00:53.010960 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aafdfa-5637-4a23-acd9-48d520e0d082" containerName="nova-cell1-conductor-db-sync" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.010994 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aafdfa-5637-4a23-acd9-48d520e0d082" containerName="nova-cell1-conductor-db-sync" Feb 16 22:00:53 crc kubenswrapper[4792]: E0216 22:00:53.011029 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1a09bd-f9f3-4fd9-89a8-c11010239591" containerName="nova-manage" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.011039 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1a09bd-f9f3-4fd9-89a8-c11010239591" containerName="nova-manage" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.011518 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1a09bd-f9f3-4fd9-89a8-c11010239591" containerName="nova-manage" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.011550 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aafdfa-5637-4a23-acd9-48d520e0d082" containerName="nova-cell1-conductor-db-sync" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.013086 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.018928 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.024115 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.076051 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.076214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.076273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmc2\" (UniqueName: \"kubernetes.io/projected/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-kube-api-access-wbmc2\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.177582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbmc2\" (UniqueName: \"kubernetes.io/projected/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-kube-api-access-wbmc2\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.177765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.177854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.182424 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.184530 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.208941 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbmc2\" (UniqueName: \"kubernetes.io/projected/1c02c9b2-0bc4-4417-8f78-e31791c9d8d6-kube-api-access-wbmc2\") pod \"nova-cell1-conductor-0\" (UID: \"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.293283 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.342561 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.384125 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvtc2\" (UniqueName: \"kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2\") pod \"67d0476b-27f8-4543-914d-fddf5b2960b7\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.384195 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data\") pod \"67d0476b-27f8-4543-914d-fddf5b2960b7\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.384283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle\") pod \"67d0476b-27f8-4543-914d-fddf5b2960b7\" (UID: \"67d0476b-27f8-4543-914d-fddf5b2960b7\") " Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.389060 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2" (OuterVolumeSpecName: "kube-api-access-gvtc2") pod "67d0476b-27f8-4543-914d-fddf5b2960b7" (UID: "67d0476b-27f8-4543-914d-fddf5b2960b7"). InnerVolumeSpecName "kube-api-access-gvtc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.440967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67d0476b-27f8-4543-914d-fddf5b2960b7" (UID: "67d0476b-27f8-4543-914d-fddf5b2960b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.451933 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data" (OuterVolumeSpecName: "config-data") pod "67d0476b-27f8-4543-914d-fddf5b2960b7" (UID: "67d0476b-27f8-4543-914d-fddf5b2960b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.487343 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvtc2\" (UniqueName: \"kubernetes.io/projected/67d0476b-27f8-4543-914d-fddf5b2960b7-kube-api-access-gvtc2\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.487586 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.487608 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0476b-27f8-4543-914d-fddf5b2960b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:53 crc kubenswrapper[4792]: W0216 22:00:53.840440 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c02c9b2_0bc4_4417_8f78_e31791c9d8d6.slice/crio-034f7396ebf9708148a3701de2b9da8f2af936c8c878024d1845e1c8bc55de9f WatchSource:0}: Error finding container 034f7396ebf9708148a3701de2b9da8f2af936c8c878024d1845e1c8bc55de9f: Status 404 returned error can't find the container with id 034f7396ebf9708148a3701de2b9da8f2af936c8c878024d1845e1c8bc55de9f Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.840490 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.924535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6","Type":"ContainerStarted","Data":"034f7396ebf9708148a3701de2b9da8f2af936c8c878024d1845e1c8bc55de9f"} Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.927445 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerStarted","Data":"1ab296b49a469f4f67f81284ebda7a4950a5b2db94751c191379c6101f646019"} Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.927637 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-central-agent" containerID="cri-o://0841c2f2da3cb21170af22daed7a33f914e8e42380b26518a4a9960680d91dd8" gracePeriod=30 Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.927764 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.928066 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="proxy-httpd" containerID="cri-o://1ab296b49a469f4f67f81284ebda7a4950a5b2db94751c191379c6101f646019" gracePeriod=30 Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.928191 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-notification-agent" containerID="cri-o://5b4e3d26546f4579929b2d5fdc7cf4dcefa6a5adf946afeb5f1c6959e9495926" gracePeriod=30 Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.928248 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="sg-core" containerID="cri-o://fc347e4348bb0d11e8b36f98b21bb931bcc37f4b92d7dc6f987dfda108c16ca8" gracePeriod=30 Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.931440 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67d0476b-27f8-4543-914d-fddf5b2960b7","Type":"ContainerDied","Data":"db2bddb845b9d26e14488b8d3b5e03268cca471d98d8335bb26068f2d1c8bc2e"} Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.931493 4792 scope.go:117] "RemoveContainer" containerID="54711b9aae472a7c9579ab3e7649d97a77586b561ddc0ca541ce64cda4c890a9" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.931715 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:53 crc kubenswrapper[4792]: I0216 22:00:53.962778 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.43941534 podStartE2EDuration="5.962761183s" podCreationTimestamp="2026-02-16 22:00:48 +0000 UTC" firstStartedPulling="2026-02-16 22:00:50.082222993 +0000 UTC m=+1382.735501884" lastFinishedPulling="2026-02-16 22:00:53.605568836 +0000 UTC m=+1386.258847727" observedRunningTime="2026-02-16 22:00:53.951832988 +0000 UTC m=+1386.605111899" watchObservedRunningTime="2026-02-16 22:00:53.962761183 +0000 UTC m=+1386.616040074" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.046008 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.070028 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.095113 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:54 crc kubenswrapper[4792]: E0216 22:00:54.095688 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d0476b-27f8-4543-914d-fddf5b2960b7" containerName="nova-scheduler-scheduler" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.095710 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d0476b-27f8-4543-914d-fddf5b2960b7" containerName="nova-scheduler-scheduler" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.095956 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d0476b-27f8-4543-914d-fddf5b2960b7" containerName="nova-scheduler-scheduler" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.096794 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.101589 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.107704 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.257802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.258118 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.258375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqtm\" (UniqueName: \"kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.361111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.361345 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.361468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqtm\" (UniqueName: \"kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.366796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.368067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.379159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqtm\" (UniqueName: 
\"kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm\") pod \"nova-scheduler-0\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") " pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.446636 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.943658 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c02c9b2-0bc4-4417-8f78-e31791c9d8d6","Type":"ContainerStarted","Data":"5c20de6dfb0633ea6d458e15af9272f9ec601d993c43ed006a3de9b78e9f3a57"} Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.944058 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.948145 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerID="fc347e4348bb0d11e8b36f98b21bb931bcc37f4b92d7dc6f987dfda108c16ca8" exitCode=2 Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.948177 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerID="5b4e3d26546f4579929b2d5fdc7cf4dcefa6a5adf946afeb5f1c6959e9495926" exitCode=0 Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.948200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerDied","Data":"fc347e4348bb0d11e8b36f98b21bb931bcc37f4b92d7dc6f987dfda108c16ca8"} Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.948228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerDied","Data":"5b4e3d26546f4579929b2d5fdc7cf4dcefa6a5adf946afeb5f1c6959e9495926"} Feb 16 22:00:54 crc kubenswrapper[4792]: I0216 22:00:54.963312 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.96329134 podStartE2EDuration="2.96329134s" podCreationTimestamp="2026-02-16 22:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:54.961314992 +0000 UTC m=+1387.614593883" watchObservedRunningTime="2026-02-16 22:00:54.96329134 +0000 UTC m=+1387.616570231" Feb 16 22:00:55 crc kubenswrapper[4792]: I0216 22:00:55.077920 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 22:00:55 crc kubenswrapper[4792]: I0216 22:00:55.960508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05f62987-b755-4f2e-bbf9-8b8f09e81602","Type":"ContainerStarted","Data":"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"} Feb 16 22:00:55 crc kubenswrapper[4792]: I0216 22:00:55.961247 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05f62987-b755-4f2e-bbf9-8b8f09e81602","Type":"ContainerStarted","Data":"97586444c5506033059d17be4a3dc8a5cf4bafb35a3c4277325121989ec2ba0f"} Feb 16 22:00:55 crc kubenswrapper[4792]: I0216 22:00:55.962896 4792 generic.go:334] "Generic (PLEG): container finished" podID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerID="6f5fafff1a752203d09edc97d579187d40fdcd0b9d6d0c87c811ad976f4cc1a2" exitCode=0 Feb 16 22:00:55 crc 
kubenswrapper[4792]: I0216 22:00:55.963241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerDied","Data":"6f5fafff1a752203d09edc97d579187d40fdcd0b9d6d0c87c811ad976f4cc1a2"} Feb 16 22:00:55 crc kubenswrapper[4792]: I0216 22:00:55.985868 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.985847584 podStartE2EDuration="2.985847584s" podCreationTimestamp="2026-02-16 22:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:55.983620547 +0000 UTC m=+1388.636899438" watchObservedRunningTime="2026-02-16 22:00:55.985847584 +0000 UTC m=+1388.639126475" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.040287 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d0476b-27f8-4543-914d-fddf5b2960b7" path="/var/lib/kubelet/pods/67d0476b-27f8-4543-914d-fddf5b2960b7/volumes" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.056350 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.207633 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vpv\" (UniqueName: \"kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv\") pod \"e94578a2-d30b-4ad9-a739-57c49ba01116\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.209221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs\") pod \"e94578a2-d30b-4ad9-a739-57c49ba01116\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.209310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data\") pod \"e94578a2-d30b-4ad9-a739-57c49ba01116\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.209345 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle\") pod \"e94578a2-d30b-4ad9-a739-57c49ba01116\" (UID: \"e94578a2-d30b-4ad9-a739-57c49ba01116\") " Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.209929 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs" (OuterVolumeSpecName: "logs") pod "e94578a2-d30b-4ad9-a739-57c49ba01116" (UID: "e94578a2-d30b-4ad9-a739-57c49ba01116"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.218446 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e94578a2-d30b-4ad9-a739-57c49ba01116-logs\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.226725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv" (OuterVolumeSpecName: "kube-api-access-42vpv") pod "e94578a2-d30b-4ad9-a739-57c49ba01116" (UID: "e94578a2-d30b-4ad9-a739-57c49ba01116"). InnerVolumeSpecName "kube-api-access-42vpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.247279 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data" (OuterVolumeSpecName: "config-data") pod "e94578a2-d30b-4ad9-a739-57c49ba01116" (UID: "e94578a2-d30b-4ad9-a739-57c49ba01116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.248127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e94578a2-d30b-4ad9-a739-57c49ba01116" (UID: "e94578a2-d30b-4ad9-a739-57c49ba01116"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.320356 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.320403 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94578a2-d30b-4ad9-a739-57c49ba01116-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.320420 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vpv\" (UniqueName: \"kubernetes.io/projected/e94578a2-d30b-4ad9-a739-57c49ba01116-kube-api-access-42vpv\") on node \"crc\" DevicePath \"\"" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.981478 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e94578a2-d30b-4ad9-a739-57c49ba01116","Type":"ContainerDied","Data":"b46e3ac4e26d164b29b1ef5bd93dc0007a620f3602b51e894dac6558c8b45ae9"} Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.981534 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:56 crc kubenswrapper[4792]: I0216 22:00:56.981546 4792 scope.go:117] "RemoveContainer" containerID="6f5fafff1a752203d09edc97d579187d40fdcd0b9d6d0c87c811ad976f4cc1a2" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.042536 4792 scope.go:117] "RemoveContainer" containerID="ae2f7f6f5fc3d95a7a343de73fdcf66c0e1b71d1355510b301735a2ef55fe36a" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.045957 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.058201 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.088769 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:57 crc kubenswrapper[4792]: E0216 22:00:57.089276 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-log" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.089293 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-log" Feb 16 22:00:57 crc kubenswrapper[4792]: E0216 22:00:57.089317 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-api" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.089326 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-api" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.110434 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-log" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.110722 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" containerName="nova-api-api" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.113760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.114004 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.123424 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.252729 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.253015 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.253458 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxpfg\" (UniqueName: \"kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.253742 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.356337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxpfg\" (UniqueName: \"kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.356458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.356561 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.356615 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.357183 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.365625 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.368644 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.385730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxpfg\" (UniqueName: \"kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg\") pod \"nova-api-0\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.460662 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.941621 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:00:57 crc kubenswrapper[4792]: I0216 22:00:57.994275 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerStarted","Data":"30dfe230ba3309cdd621c201bf04278e6f72ec39485f82e4ca972b9a7a38b855"} Feb 16 22:00:58 crc kubenswrapper[4792]: I0216 22:00:58.042373 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94578a2-d30b-4ad9-a739-57c49ba01116" path="/var/lib/kubelet/pods/e94578a2-d30b-4ad9-a739-57c49ba01116/volumes" Feb 16 22:00:59 crc kubenswrapper[4792]: I0216 22:00:59.007683 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerStarted","Data":"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae"} Feb 16 22:00:59 crc kubenswrapper[4792]: I0216 22:00:59.008068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerStarted","Data":"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733"} Feb 16 22:00:59 crc kubenswrapper[4792]: I0216 22:00:59.447854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.140200 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.140170751 podStartE2EDuration="3.140170751s" podCreationTimestamp="2026-02-16 22:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:59.029458188 +0000 UTC m=+1391.682737079" watchObservedRunningTime="2026-02-16 22:01:00.140170751 +0000 UTC m=+1392.793449642" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.147372 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521321-bn56l"] Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.148918 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.164677 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-bn56l"] Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.322833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.323221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.323259 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.323327 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gnb\" (UniqueName: \"kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.425996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.426349 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.426539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8gnb\" (UniqueName: \"kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.426786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.432224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.432938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.435926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.444983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8gnb\" (UniqueName: \"kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb\") pod \"keystone-cron-29521321-bn56l\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:00 crc kubenswrapper[4792]: I0216 22:01:00.477030 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:01 crc kubenswrapper[4792]: I0216 22:01:01.012702 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-bn56l"] Feb 16 22:01:01 crc kubenswrapper[4792]: I0216 22:01:01.041445 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-bn56l" event={"ID":"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd","Type":"ContainerStarted","Data":"54ee84abdef93d35e50b10c36a709eccd932a82dcc578c643b8db20759445046"} Feb 16 22:01:02 crc kubenswrapper[4792]: I0216 22:01:02.053312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-bn56l" event={"ID":"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd","Type":"ContainerStarted","Data":"ebaf8fa234de0c63e34deac496bcc394a0070df0fbf9ca8c20248b1aafa56275"} Feb 16 22:01:02 crc kubenswrapper[4792]: I0216 22:01:02.078227 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521321-bn56l" podStartSLOduration=2.078209965 podStartE2EDuration="2.078209965s" podCreationTimestamp="2026-02-16 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:02.067169666 +0000 UTC m=+1394.720448567" watchObservedRunningTime="2026-02-16 22:01:02.078209965 +0000 UTC m=+1394.731488856" Feb 16 22:01:03 crc kubenswrapper[4792]: I0216 22:01:03.068092 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerID="0841c2f2da3cb21170af22daed7a33f914e8e42380b26518a4a9960680d91dd8" exitCode=0 Feb 16 22:01:03 crc kubenswrapper[4792]: I0216 22:01:03.068207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerDied","Data":"0841c2f2da3cb21170af22daed7a33f914e8e42380b26518a4a9960680d91dd8"} Feb 16 22:01:03 crc kubenswrapper[4792]: I0216 
22:01:03.378026 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 22:01:04 crc kubenswrapper[4792]: I0216 22:01:04.447930 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 22:01:04 crc kubenswrapper[4792]: I0216 22:01:04.482274 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 22:01:05 crc kubenswrapper[4792]: I0216 22:01:05.092226 4792 generic.go:334] "Generic (PLEG): container finished" podID="f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" containerID="ebaf8fa234de0c63e34deac496bcc394a0070df0fbf9ca8c20248b1aafa56275" exitCode=0 Feb 16 22:01:05 crc kubenswrapper[4792]: I0216 22:01:05.092316 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-bn56l" event={"ID":"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd","Type":"ContainerDied","Data":"ebaf8fa234de0c63e34deac496bcc394a0070df0fbf9ca8c20248b1aafa56275"} Feb 16 22:01:05 crc kubenswrapper[4792]: I0216 22:01:05.126462 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.503741 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.605474 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data\") pod \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.605964 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle\") pod \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.606011 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8gnb\" (UniqueName: \"kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb\") pod \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.606065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys\") pod \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\" (UID: \"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd\") " Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.611493 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" (UID: "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.625889 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb" (OuterVolumeSpecName: "kube-api-access-b8gnb") pod "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" (UID: "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd"). InnerVolumeSpecName "kube-api-access-b8gnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.641893 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" (UID: "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.674809 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data" (OuterVolumeSpecName: "config-data") pod "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" (UID: "f21375f1-ace7-4a32-aaa7-eb7752bc5ffd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.708518 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.708567 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8gnb\" (UniqueName: \"kubernetes.io/projected/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-kube-api-access-b8gnb\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.708579 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:06 crc kubenswrapper[4792]: I0216 22:01:06.708587 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21375f1-ace7-4a32-aaa7-eb7752bc5ffd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:07 crc kubenswrapper[4792]: I0216 22:01:07.118355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-bn56l" event={"ID":"f21375f1-ace7-4a32-aaa7-eb7752bc5ffd","Type":"ContainerDied","Data":"54ee84abdef93d35e50b10c36a709eccd932a82dcc578c643b8db20759445046"} Feb 16 22:01:07 crc kubenswrapper[4792]: I0216 22:01:07.118396 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521321-bn56l" Feb 16 22:01:07 crc kubenswrapper[4792]: I0216 22:01:07.118405 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ee84abdef93d35e50b10c36a709eccd932a82dcc578c643b8db20759445046" Feb 16 22:01:07 crc kubenswrapper[4792]: I0216 22:01:07.461180 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:01:07 crc kubenswrapper[4792]: I0216 22:01:07.461236 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:01:08 crc kubenswrapper[4792]: I0216 22:01:08.543776 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:08 crc kubenswrapper[4792]: I0216 22:01:08.543810 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.213669 4792 generic.go:334] "Generic (PLEG): container finished" podID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" containerID="2f0db2c562133693caa8111d5ac50ef75ec1c3c2b171fd2a807ee3a40d9ea9b0" exitCode=137 Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.213785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"46b0b16f-b82e-4daf-841e-6d8aa64e35e0","Type":"ContainerDied","Data":"2f0db2c562133693caa8111d5ac50ef75ec1c3c2b171fd2a807ee3a40d9ea9b0"} Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.214332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"46b0b16f-b82e-4daf-841e-6d8aa64e35e0","Type":"ContainerDied","Data":"e0827ed980a46616ca199b9bacb72ca886c19d22b342ce483c37299954a511c7"} Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.214354 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0827ed980a46616ca199b9bacb72ca886c19d22b342ce483c37299954a511c7" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.218022 4792 generic.go:334] "Generic (PLEG): container finished" podID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerID="e1a001bdd889500b588eb1b797e9317fff6a976c6834cb04dd76892a3f9e0f80" exitCode=137 Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.218072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerDied","Data":"e1a001bdd889500b588eb1b797e9317fff6a976c6834cb04dd76892a3f9e0f80"} Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.218101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cbaa069a-f9fc-46af-9a91-71a0f838c821","Type":"ContainerDied","Data":"c9dcc7ad5c0cc4ac92945d585fed2ef7a4be09edc9f81ccf6d10b383a02fc909"} Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.218116 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9dcc7ad5c0cc4ac92945d585fed2ef7a4be09edc9f81ccf6d10b383a02fc909" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 
22:01:15.240677 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.243307 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.310681 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs\") pod \"cbaa069a-f9fc-46af-9a91-71a0f838c821\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.310788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle\") pod \"cbaa069a-f9fc-46af-9a91-71a0f838c821\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.310964 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data\") pod \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.311009 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data\") pod \"cbaa069a-f9fc-46af-9a91-71a0f838c821\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.311075 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jftx4\" (UniqueName: \"kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4\") pod \"cbaa069a-f9fc-46af-9a91-71a0f838c821\" (UID: \"cbaa069a-f9fc-46af-9a91-71a0f838c821\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.311134 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle\") pod \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.311185 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7kbq\" (UniqueName: \"kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq\") pod \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\" (UID: \"46b0b16f-b82e-4daf-841e-6d8aa64e35e0\") " Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.313154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs" (OuterVolumeSpecName: "logs") pod "cbaa069a-f9fc-46af-9a91-71a0f838c821" (UID: "cbaa069a-f9fc-46af-9a91-71a0f838c821"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.319079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq" (OuterVolumeSpecName: "kube-api-access-f7kbq") pod "46b0b16f-b82e-4daf-841e-6d8aa64e35e0" (UID: "46b0b16f-b82e-4daf-841e-6d8aa64e35e0"). InnerVolumeSpecName "kube-api-access-f7kbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.319228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4" (OuterVolumeSpecName: "kube-api-access-jftx4") pod "cbaa069a-f9fc-46af-9a91-71a0f838c821" (UID: "cbaa069a-f9fc-46af-9a91-71a0f838c821"). InnerVolumeSpecName "kube-api-access-jftx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.344825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data" (OuterVolumeSpecName: "config-data") pod "cbaa069a-f9fc-46af-9a91-71a0f838c821" (UID: "cbaa069a-f9fc-46af-9a91-71a0f838c821"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.347935 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46b0b16f-b82e-4daf-841e-6d8aa64e35e0" (UID: "46b0b16f-b82e-4daf-841e-6d8aa64e35e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.351805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data" (OuterVolumeSpecName: "config-data") pod "46b0b16f-b82e-4daf-841e-6d8aa64e35e0" (UID: "46b0b16f-b82e-4daf-841e-6d8aa64e35e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.352234 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbaa069a-f9fc-46af-9a91-71a0f838c821" (UID: "cbaa069a-f9fc-46af-9a91-71a0f838c821"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413673 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413702 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413711 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jftx4\" (UniqueName: \"kubernetes.io/projected/cbaa069a-f9fc-46af-9a91-71a0f838c821-kube-api-access-jftx4\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413719 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413729 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7kbq\" (UniqueName: \"kubernetes.io/projected/46b0b16f-b82e-4daf-841e-6d8aa64e35e0-kube-api-access-f7kbq\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413737 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbaa069a-f9fc-46af-9a91-71a0f838c821-logs\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:15 crc kubenswrapper[4792]: I0216 22:01:15.413746 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbaa069a-f9fc-46af-9a91-71a0f838c821-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.231444 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.233127 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.280644 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.291383 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.330722 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: E0216 22:01:16.410925 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbaa069a_f9fc_46af_9a91_71a0f838c821.slice/crio-c9dcc7ad5c0cc4ac92945d585fed2ef7a4be09edc9f81ccf6d10b383a02fc909\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b0b16f_b82e_4daf_841e_6d8aa64e35e0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbaa069a_f9fc_46af_9a91_71a0f838c821.slice\": RecentStats: unable to find data in memory cache]" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.425729 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.467907 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: E0216 22:01:16.468436 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-log" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.468455 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-log" Feb 16 22:01:16 crc kubenswrapper[4792]: E0216 22:01:16.468475 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" containerName="keystone-cron" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.468482 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" containerName="keystone-cron" Feb 16 22:01:16 crc kubenswrapper[4792]: E0216 22:01:16.468711 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-metadata" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.468719 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-metadata" Feb 16 22:01:16 crc kubenswrapper[4792]: E0216 22:01:16.468735 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.468741 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.469010 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21375f1-ace7-4a32-aaa7-eb7752bc5ffd" containerName="keystone-cron" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.469037 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.469054 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-metadata" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.469069 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" containerName="nova-metadata-log" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.470278 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.472937 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.473330 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.479713 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.493944 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.500070 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.505071 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.505852 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.506004 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.507949 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.546834 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.546898 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.546918 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swfz4\" (UniqueName: \"kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547043 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547097 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2jr\" (UniqueName: \"kubernetes.io/projected/24be6f91-f4d4-44ae-9cf4-17690f27e4be-kube-api-access-qz2jr\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547339 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.547359 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swfz4\" (UniqueName: 
\"kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649680 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2jr\" (UniqueName: \"kubernetes.io/projected/24be6f91-f4d4-44ae-9cf4-17690f27e4be-kube-api-access-qz2jr\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649696 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.649715 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.650822 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.655297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.655410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.655587 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.656834 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.660074 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24be6f91-f4d4-44ae-9cf4-17690f27e4be-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.664445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.667137 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2jr\" (UniqueName: \"kubernetes.io/projected/24be6f91-f4d4-44ae-9cf4-17690f27e4be-kube-api-access-qz2jr\") pod \"nova-cell1-novncproxy-0\" (UID: \"24be6f91-f4d4-44ae-9cf4-17690f27e4be\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.667661 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.672481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swfz4\" (UniqueName: \"kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4\") pod \"nova-metadata-0\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") " pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.788275 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:16 crc kubenswrapper[4792]: I0216 22:01:16.837781 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.256129 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:17 crc kubenswrapper[4792]: W0216 22:01:17.258325 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf70292f_dd26_4ae5_b66f_6f2cc7473ef7.slice/crio-4464280adcbc7837b5176e7ff4e010ad3069b2d9b2484f6d55fe7c13cfc320ac WatchSource:0}: Error finding container 4464280adcbc7837b5176e7ff4e010ad3069b2d9b2484f6d55fe7c13cfc320ac: Status 404 returned error can't find the container with id 4464280adcbc7837b5176e7ff4e010ad3069b2d9b2484f6d55fe7c13cfc320ac Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.385198 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 22:01:17 crc kubenswrapper[4792]: W0216 22:01:17.385574 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24be6f91_f4d4_44ae_9cf4_17690f27e4be.slice/crio-3aa41d162ebc15e01a88b86f66c98a9d4c1e05092507de80edd0d4bb7ee4abb5 WatchSource:0}: Error finding container 3aa41d162ebc15e01a88b86f66c98a9d4c1e05092507de80edd0d4bb7ee4abb5: Status 404 returned error can't find the container with id 3aa41d162ebc15e01a88b86f66c98a9d4c1e05092507de80edd0d4bb7ee4abb5 Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.464902 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.466839 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.466888 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 22:01:17 crc kubenswrapper[4792]: I0216 22:01:17.468568 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.042029 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b0b16f-b82e-4daf-841e-6d8aa64e35e0" path="/var/lib/kubelet/pods/46b0b16f-b82e-4daf-841e-6d8aa64e35e0/volumes" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.042996 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbaa069a-f9fc-46af-9a91-71a0f838c821" path="/var/lib/kubelet/pods/cbaa069a-f9fc-46af-9a91-71a0f838c821/volumes" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.267021 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24be6f91-f4d4-44ae-9cf4-17690f27e4be","Type":"ContainerStarted","Data":"c39a7bc8bab7eb13b3dc231bce3719a277d0928a17c66e84d1d3e31b0054b28f"} Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.267089 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24be6f91-f4d4-44ae-9cf4-17690f27e4be","Type":"ContainerStarted","Data":"3aa41d162ebc15e01a88b86f66c98a9d4c1e05092507de80edd0d4bb7ee4abb5"} Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.276191 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerStarted","Data":"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"} Feb 16 22:01:18 crc 
kubenswrapper[4792]: I0216 22:01:18.276244 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerStarted","Data":"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"} Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.276253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerStarted","Data":"4464280adcbc7837b5176e7ff4e010ad3069b2d9b2484f6d55fe7c13cfc320ac"} Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.276508 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.279997 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.300681 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.300655746 podStartE2EDuration="2.300655746s" podCreationTimestamp="2026-02-16 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:18.294087131 +0000 UTC m=+1410.947366022" watchObservedRunningTime="2026-02-16 22:01:18.300655746 +0000 UTC m=+1410.953934647" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.346222 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.34619823 podStartE2EDuration="2.34619823s" podCreationTimestamp="2026-02-16 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:18.331171893 +0000 UTC m=+1410.984450804" watchObservedRunningTime="2026-02-16 22:01:18.34619823 +0000 UTC m=+1410.999477121" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.482504 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.484585 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.496197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.496513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.496653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.496752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.496866 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8xh\" (UniqueName: \"kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.497031 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.516988 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.599572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.599811 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.599929 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.600142 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.600233 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8xh\" (UniqueName: \"kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.600331 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.600728 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.601005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.601359 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.601426 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.601880 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.622545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8xh\" (UniqueName: 
\"kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh\") pod \"dnsmasq-dns-6d99f6bc7f-hbdhp\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:18 crc kubenswrapper[4792]: I0216 22:01:18.817290 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:19 crc kubenswrapper[4792]: I0216 22:01:19.349866 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 22:01:19 crc kubenswrapper[4792]: I0216 22:01:19.420694 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.312678 4792 generic.go:334] "Generic (PLEG): container finished" podID="161250cf-19fe-49b8-bb81-4946c8b56056" containerID="9ae0bcdb1fbbd37d79bd6430b2f8d3ca0ae50523bcdbfdbbe8ea0e7e2bb8f63d" exitCode=0 Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.313659 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" event={"ID":"161250cf-19fe-49b8-bb81-4946c8b56056","Type":"ContainerDied","Data":"9ae0bcdb1fbbd37d79bd6430b2f8d3ca0ae50523bcdbfdbbe8ea0e7e2bb8f63d"} Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.313710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" event={"ID":"161250cf-19fe-49b8-bb81-4946c8b56056","Type":"ContainerStarted","Data":"97f99d9081f1498ecbbb884347e611fd74db69bfeaf508cab2b356d882875e84"} Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.359178 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerID="a41e247e9cf56106fae7b42f887a08da85365e34eea5f1589b46ce1a0c57eb6d" exitCode=137 Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.359204 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerID="18e8014d4ddb956d964e2e83b2474327700daddc31a33089f7ce59f544fdb5f9" exitCode=137 Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.359507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerDied","Data":"a41e247e9cf56106fae7b42f887a08da85365e34eea5f1589b46ce1a0c57eb6d"} Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.359530 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerDied","Data":"18e8014d4ddb956d964e2e83b2474327700daddc31a33089f7ce59f544fdb5f9"} Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.540951 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.676790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjpjt\" (UniqueName: \"kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt\") pod \"0a67b810-5101-414f-a0ed-a90a5ffc30af\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.676920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle\") pod \"0a67b810-5101-414f-a0ed-a90a5ffc30af\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.677025 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts\") pod \"0a67b810-5101-414f-a0ed-a90a5ffc30af\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.677191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data\") pod \"0a67b810-5101-414f-a0ed-a90a5ffc30af\" (UID: \"0a67b810-5101-414f-a0ed-a90a5ffc30af\") " Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.687701 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts" (OuterVolumeSpecName: "scripts") pod "0a67b810-5101-414f-a0ed-a90a5ffc30af" (UID: "0a67b810-5101-414f-a0ed-a90a5ffc30af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.690209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt" (OuterVolumeSpecName: "kube-api-access-qjpjt") pod "0a67b810-5101-414f-a0ed-a90a5ffc30af" (UID: "0a67b810-5101-414f-a0ed-a90a5ffc30af"). InnerVolumeSpecName "kube-api-access-qjpjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.779525 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjpjt\" (UniqueName: \"kubernetes.io/projected/0a67b810-5101-414f-a0ed-a90a5ffc30af-kube-api-access-qjpjt\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.779855 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.846647 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data" (OuterVolumeSpecName: "config-data") pod "0a67b810-5101-414f-a0ed-a90a5ffc30af" (UID: "0a67b810-5101-414f-a0ed-a90a5ffc30af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.871857 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a67b810-5101-414f-a0ed-a90a5ffc30af" (UID: "0a67b810-5101-414f-a0ed-a90a5ffc30af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.881996 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:20 crc kubenswrapper[4792]: I0216 22:01:20.882030 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a67b810-5101-414f-a0ed-a90a5ffc30af-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.044745 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.394676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" event={"ID":"161250cf-19fe-49b8-bb81-4946c8b56056","Type":"ContainerStarted","Data":"929df6369f20e845c3e9fc24590d951318fae1da90013e2d73ce93f9eaa6f02d"} Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.400268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0a67b810-5101-414f-a0ed-a90a5ffc30af","Type":"ContainerDied","Data":"0edd57667e534f5b0d4dffce58f016832bb73696323a28b2c645bdddbdec7e4b"} Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.400347 4792 scope.go:117] "RemoveContainer" containerID="a41e247e9cf56106fae7b42f887a08da85365e34eea5f1589b46ce1a0c57eb6d" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.400529 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.401582 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-api" containerID="cri-o://6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae" gracePeriod=30 Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.417887 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-log" containerID="cri-o://9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733" gracePeriod=30 Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.429833 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" podStartSLOduration=3.429809122 podStartE2EDuration="3.429809122s" podCreationTimestamp="2026-02-16 22:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:21.41694503 +0000 UTC m=+1414.070223921" watchObservedRunningTime="2026-02-16 22:01:21.429809122 +0000 UTC m=+1414.083088013" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.465238 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.465393 4792 scope.go:117] "RemoveContainer" containerID="18e8014d4ddb956d964e2e83b2474327700daddc31a33089f7ce59f544fdb5f9" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.478843 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.511773 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 22:01:21 crc kubenswrapper[4792]: E0216 22:01:21.513030 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-api" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513061 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-api" Feb 16 22:01:21 crc kubenswrapper[4792]: E0216 22:01:21.513148 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-notifier" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513161 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-notifier" Feb 16 22:01:21 crc kubenswrapper[4792]: E0216 22:01:21.513181 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-listener" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513189 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-listener" Feb 16 22:01:21 crc kubenswrapper[4792]: E0216 22:01:21.513232 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-evaluator" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513264 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-evaluator" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513899 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-notifier" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513937 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-api" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513969 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-evaluator" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.513992 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" containerName="aodh-listener" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.521881 4792 scope.go:117] "RemoveContainer" containerID="037ce3866246dc2e74a66254f31bf22adb9eac08da90d692e4af04a0e5ff8c03" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.526175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.536212 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.539284 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.539586 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.543490 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.543894 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9gfcj" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.546460 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.643759 4792 scope.go:117] "RemoveContainer" containerID="6e41209f855a831fb7b9607b66ba69ac6846bc3abbbdd1299adf3a5f172ebf84" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-public-tls-certs\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704350 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-config-data\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-internal-tls-certs\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-scripts\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.704528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9qf\" (UniqueName: \"kubernetes.io/projected/7d172284-1441-400a-bbf6-ba8574621533-kube-api-access-5w9qf\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.788894 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.788950 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.806836 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-public-tls-certs\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.806940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-config-data\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.807035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-internal-tls-certs\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.807084 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-scripts\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.807289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9qf\" (UniqueName: \"kubernetes.io/projected/7d172284-1441-400a-bbf6-ba8574621533-kube-api-access-5w9qf\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.807493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.830975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-public-tls-certs\") pod 
\"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.831638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-scripts\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.836165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-internal-tls-certs\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.837718 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-config-data\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.839691 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.844224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d172284-1441-400a-bbf6-ba8574621533-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.866270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9qf\" (UniqueName: \"kubernetes.io/projected/7d172284-1441-400a-bbf6-ba8574621533-kube-api-access-5w9qf\") pod \"aodh-0\" (UID: \"7d172284-1441-400a-bbf6-ba8574621533\") " pod="openstack/aodh-0" Feb 16 22:01:21 crc kubenswrapper[4792]: I0216 22:01:21.927541 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 22:01:22 crc kubenswrapper[4792]: I0216 22:01:22.046019 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a67b810-5101-414f-a0ed-a90a5ffc30af" path="/var/lib/kubelet/pods/0a67b810-5101-414f-a0ed-a90a5ffc30af/volumes" Feb 16 22:01:22 crc kubenswrapper[4792]: I0216 22:01:22.426540 4792 generic.go:334] "Generic (PLEG): container finished" podID="97856010-8f38-413e-b0dd-11c355f16bf5" containerID="9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733" exitCode=143 Feb 16 22:01:22 crc kubenswrapper[4792]: I0216 22:01:22.426682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerDied","Data":"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733"} Feb 16 22:01:22 crc kubenswrapper[4792]: I0216 22:01:22.426783 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:01:22 crc kubenswrapper[4792]: W0216 22:01:22.524779 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d172284_1441_400a_bbf6_ba8574621533.slice/crio-74ba87227849fbec48f48fa5d1043b36209df84c0143b248da9b0fe3311fc7c7 WatchSource:0}: Error finding container 74ba87227849fbec48f48fa5d1043b36209df84c0143b248da9b0fe3311fc7c7: Status 404 returned error can't find the container with id 74ba87227849fbec48f48fa5d1043b36209df84c0143b248da9b0fe3311fc7c7 Feb 16 22:01:22 crc kubenswrapper[4792]: I0216 22:01:22.530748 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 22:01:23 crc kubenswrapper[4792]: I0216 22:01:23.440379 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d172284-1441-400a-bbf6-ba8574621533","Type":"ContainerStarted","Data":"6f58e563ed13a15081905d6a9b0944f5065def6f377f1d5da14b901caa2c1175"} Feb 16 22:01:23 crc kubenswrapper[4792]: I0216 22:01:23.440987 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d172284-1441-400a-bbf6-ba8574621533","Type":"ContainerStarted","Data":"74ba87227849fbec48f48fa5d1043b36209df84c0143b248da9b0fe3311fc7c7"} Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.456553 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerID="1ab296b49a469f4f67f81284ebda7a4950a5b2db94751c191379c6101f646019" exitCode=137 Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.456699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerDied","Data":"1ab296b49a469f4f67f81284ebda7a4950a5b2db94751c191379c6101f646019"} Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.457169 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec48ea94-a647-4b94-96cc-fc3a974c74bd","Type":"ContainerDied","Data":"ec2635ffd1a46c5c59e88fea2556ecd5961810efee1d5103136e3d1871d480bb"} Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.457188 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2635ffd1a46c5c59e88fea2556ecd5961810efee1d5103136e3d1871d480bb" Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.459287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"7d172284-1441-400a-bbf6-ba8574621533","Type":"ContainerStarted","Data":"c06e666399d19ea85eb9590e59094f5c0b8ae9aa7f1b44c163d7cc04564f25c7"} Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.579945 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679151 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzwq\" (UniqueName: \"kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679314 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679357 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679449 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.679484 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle\") pod \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\" (UID: \"ec48ea94-a647-4b94-96cc-fc3a974c74bd\") " Feb 16 22:01:24 crc kubenswrapper[4792]: I0216 22:01:24.687980 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.252245 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.253450 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.256970 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts" (OuterVolumeSpecName: "scripts") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.257782 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq" (OuterVolumeSpecName: "kube-api-access-vkzwq") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "kube-api-access-vkzwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.264741 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.356259 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.356302 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzwq\" (UniqueName: \"kubernetes.io/projected/ec48ea94-a647-4b94-96cc-fc3a974c74bd-kube-api-access-vkzwq\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.356312 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec48ea94-a647-4b94-96cc-fc3a974c74bd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.356321 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.464572 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.485656 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.486742 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data" (OuterVolumeSpecName: "config-data") pod "ec48ea94-a647-4b94-96cc-fc3a974c74bd" (UID: "ec48ea94-a647-4b94-96cc-fc3a974c74bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.486806 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d172284-1441-400a-bbf6-ba8574621533","Type":"ContainerStarted","Data":"5fbb934791022ebe42957af87d2c0e9d19c4a3e791d54057e5cd7c8fcb12fa41"} Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.566228 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.566262 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec48ea94-a647-4b94-96cc-fc3a974c74bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.899457 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.912683 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.943956 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:01:25 crc kubenswrapper[4792]: E0216 22:01:25.944429 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="sg-core" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.944444 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="sg-core" Feb 16 22:01:25 crc kubenswrapper[4792]: E0216 22:01:25.944452 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="proxy-httpd" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.944458 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="proxy-httpd" Feb 16 22:01:25 crc kubenswrapper[4792]: E0216 22:01:25.944478 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-central-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.944483 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-central-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: E0216 22:01:25.944508 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-notification-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.944514 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-notification-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.945743 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="sg-core" 
Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.945758 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-notification-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.945779 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="ceilometer-central-agent" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.945792 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" containerName="proxy-httpd" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.957884 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.964378 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.964787 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 22:01:25 crc kubenswrapper[4792]: I0216 22:01:25.979438 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.043347 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec48ea94-a647-4b94-96cc-fc3a974c74bd" path="/var/lib/kubelet/pods/ec48ea94-a647-4b94-96cc-fc3a974c74bd/volumes" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094621 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094721 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zbqrd\" (UniqueName: \"kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.094835 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.165058 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.196865 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.196992 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197056 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqrd\" (UniqueName: \"kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197218 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197394 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " 
pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.197685 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.201505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.203409 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.203683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.206291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.225927 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbqrd\" (UniqueName: \"kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd\") pod \"ceilometer-0\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.298758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data\") pod \"97856010-8f38-413e-b0dd-11c355f16bf5\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.299090 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle\") pod \"97856010-8f38-413e-b0dd-11c355f16bf5\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.299370 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxpfg\" (UniqueName: \"kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg\") pod \"97856010-8f38-413e-b0dd-11c355f16bf5\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.299409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs\") pod \"97856010-8f38-413e-b0dd-11c355f16bf5\" (UID: \"97856010-8f38-413e-b0dd-11c355f16bf5\") " Feb 16 22:01:26 crc 
kubenswrapper[4792]: I0216 22:01:26.300722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs" (OuterVolumeSpecName: "logs") pod "97856010-8f38-413e-b0dd-11c355f16bf5" (UID: "97856010-8f38-413e-b0dd-11c355f16bf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.301396 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.310872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg" (OuterVolumeSpecName: "kube-api-access-xxpfg") pod "97856010-8f38-413e-b0dd-11c355f16bf5" (UID: "97856010-8f38-413e-b0dd-11c355f16bf5"). InnerVolumeSpecName "kube-api-access-xxpfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.368687 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97856010-8f38-413e-b0dd-11c355f16bf5" (UID: "97856010-8f38-413e-b0dd-11c355f16bf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.385936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data" (OuterVolumeSpecName: "config-data") pod "97856010-8f38-413e-b0dd-11c355f16bf5" (UID: "97856010-8f38-413e-b0dd-11c355f16bf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.402754 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxpfg\" (UniqueName: \"kubernetes.io/projected/97856010-8f38-413e-b0dd-11c355f16bf5-kube-api-access-xxpfg\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.402784 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97856010-8f38-413e-b0dd-11c355f16bf5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.402794 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.402803 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97856010-8f38-413e-b0dd-11c355f16bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.541180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7d172284-1441-400a-bbf6-ba8574621533","Type":"ContainerStarted","Data":"07259157a4089536841f7a8ed11925edb21a89e579feb4ea7aeda2b3ba521635"} Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.554096 4792 generic.go:334] "Generic (PLEG): container finished" podID="97856010-8f38-413e-b0dd-11c355f16bf5" containerID="6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae" exitCode=0 Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.554144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerDied","Data":"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae"} Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.554174 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"97856010-8f38-413e-b0dd-11c355f16bf5","Type":"ContainerDied","Data":"30dfe230ba3309cdd621c201bf04278e6f72ec39485f82e4ca972b9a7a38b855"} Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.554195 4792 scope.go:117] "RemoveContainer" containerID="6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.554347 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.564521 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.167244333 podStartE2EDuration="5.564503358s" podCreationTimestamp="2026-02-16 22:01:21 +0000 UTC" firstStartedPulling="2026-02-16 22:01:22.532538878 +0000 UTC m=+1415.185817769" lastFinishedPulling="2026-02-16 22:01:25.929797903 +0000 UTC m=+1418.583076794" observedRunningTime="2026-02-16 22:01:26.562518139 +0000 UTC m=+1419.215797050" watchObservedRunningTime="2026-02-16 22:01:26.564503358 +0000 UTC m=+1419.217782249" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.616826 4792 scope.go:117] "RemoveContainer" containerID="9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.644137 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.647889 4792 scope.go:117] "RemoveContainer" containerID="6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae" Feb 16 22:01:26 crc kubenswrapper[4792]: E0216 22:01:26.648328 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae\": container with ID starting with 6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae not found: ID does not exist" containerID="6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.648358 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae"} err="failed to get container status \"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae\": rpc error: code = NotFound desc = could not find container \"6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae\": container with ID starting with 6919ac9965a28481cae05bbedcb77164b15ce42dd53efa9f6b1df93502b7a9ae not found: ID does not exist" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.648381 4792 scope.go:117] "RemoveContainer" containerID="9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733" Feb 16 22:01:26 crc kubenswrapper[4792]: E0216 22:01:26.648711 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733\": container with ID starting with 9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733 not found: ID does not exist" containerID="9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.648728 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733"} err="failed to get container status \"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733\": rpc error: code = NotFound desc = could not find container \"9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733\": container with ID starting with 9d085aa56924202726b1c49051e082f21e83b84caa430b74de4dda5366d8a733 not found: ID does not exist" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.658553 4792 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.675817 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: E0216 22:01:26.676810 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-log" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.676826 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-log" Feb 16 22:01:26 crc kubenswrapper[4792]: E0216 22:01:26.676861 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-api" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.676869 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-api" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.679257 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-log" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.679301 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" containerName="nova-api-api" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.681139 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.685515 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.685585 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.685779 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.705513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.789511 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.790838 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.813992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.814052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.814085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data\") pod 
\"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.814200 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.814255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdb6c\" (UniqueName: \"kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.814306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.839205 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.852913 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:01:26 crc kubenswrapper[4792]: W0216 22:01:26.856659 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod011957d1_61c8_444f_a365_4382969bbd58.slice/crio-e38dc71c27d57d215db778cf9cb7dfcc8bb7585d1d679c2b459a12b6590e1c98 WatchSource:0}: Error finding container e38dc71c27d57d215db778cf9cb7dfcc8bb7585d1d679c2b459a12b6590e1c98: Status 404 returned error can't find the container with id e38dc71c27d57d215db778cf9cb7dfcc8bb7585d1d679c2b459a12b6590e1c98 Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.870175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.916537 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.916700 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.916943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.916995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data\") pod 
\"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.917175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.917307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdb6c\" (UniqueName: \"kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.917366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.925161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.926088 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.933321 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.933622 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:26 crc kubenswrapper[4792]: I0216 22:01:26.936281 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdb6c\" (UniqueName: \"kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c\") pod \"nova-api-0\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " pod="openstack/nova-api-0" Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.006368 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.517992 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:27 crc kubenswrapper[4792]: W0216 22:01:27.521965 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda453f426_c933_4a61_bb74_096f6171f7de.slice/crio-a6b2bbe7bc268aaae7c1dc770e490360e59207b1ff77b28ad086acefe4ff2409 WatchSource:0}: Error finding container a6b2bbe7bc268aaae7c1dc770e490360e59207b1ff77b28ad086acefe4ff2409: Status 404 returned error can't find the container with id a6b2bbe7bc268aaae7c1dc770e490360e59207b1ff77b28ad086acefe4ff2409 Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.567548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerStarted","Data":"e38dc71c27d57d215db778cf9cb7dfcc8bb7585d1d679c2b459a12b6590e1c98"} Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.568393 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerStarted","Data":"a6b2bbe7bc268aaae7c1dc770e490360e59207b1ff77b28ad086acefe4ff2409"} Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.589648 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.813215 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.813298 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.955032 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q27pr"] Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.957894 4792 util.go:30] "No sandbox for pod can be found. 
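The two "Probe failed" entries above are the kubelet's startup probe timing out against the nova-metadata endpoint while the service is still warming up. A minimal sketch of a probe that would produce this behavior follows; the real probe is rendered by the nova-operator, so the path, timeout, and thresholds here are assumptions rather than values taken from the cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// An HTTPS GET against port 8775 with a short timeout: while the
	// WSGI service is still starting, the request hangs and the probe
	// fails with "Client.Timeout exceeded while awaiting headers".
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   1,  // short timeout reproduces the logged failure mode
		PeriodSeconds:    10, // kubelet re-probes until FailureThreshold is exhausted
		FailureThreshold: 30,
	}
	fmt.Printf("startup probe: %+v\n", probe)
}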
Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.988496 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 16 22:01:27 crc kubenswrapper[4792]: I0216 22:01:27.988943 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.045393 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfs8k\" (UniqueName: \"kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.045545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.045638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.086503 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.143923 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97856010-8f38-413e-b0dd-11c355f16bf5" path="/var/lib/kubelet/pods/97856010-8f38-413e-b0dd-11c355f16bf5/volumes"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.144735 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27pr"]
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.191443 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.191793 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.191942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.203164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfs8k\" (UniqueName: \"kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.250511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.261165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.270774 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.292763 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfs8k\" (UniqueName: \"kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k\") pod \"nova-cell1-cell-mapping-q27pr\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") " pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.428687 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.616203 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerStarted","Data":"f13e3649579c967fdd513637498e965ec8a6b2456e69d4ddbfc74338524f2ab6"}
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.629892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerStarted","Data":"d8bc4c1309a043b098301ff393835f93cb3f5a778eda3b7f7931f203a6376090"}
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.819734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.946529 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"]
Feb 16 22:01:28 crc kubenswrapper[4792]: I0216 22:01:28.947116 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="dnsmasq-dns" containerID="cri-o://39734c6f8f55659d7c9cb021060fa9f4fe423fa05fcaf185cd1b4ebf0ecfb6af" gracePeriod=10
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.253037 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27pr"]
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.656380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerStarted","Data":"f3bdbe05344c97a2f3174ba5363690898c8acf77b7e31a54a2bf96b1b80ba86d"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.656968 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerStarted","Data":"04a28427d4bfaafbc3e454c69d8cc18eb9e0841c5ee0c454fb0a4103808bfcf0"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.670765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27pr" event={"ID":"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8","Type":"ContainerStarted","Data":"50898a4098d7562b2ec8429f06197a8a64523f0b11cf5a58c14c86cf7254f9df"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.670806 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27pr" event={"ID":"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8","Type":"ContainerStarted","Data":"cf2ccca53f993227aeadbcd60c28efcae8ef2d7a8bc2060b5c7567d9b3e18cab"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.677489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerStarted","Data":"8d8daef035f53feeb68a3b036bf0b174214b18f91caa3d18f1983d16d0b0e111"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.687665 4792 generic.go:334] "Generic (PLEG): container finished" podID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerID="39734c6f8f55659d7c9cb021060fa9f4fe423fa05fcaf185cd1b4ebf0ecfb6af" exitCode=0
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.687707 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" event={"ID":"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1","Type":"ContainerDied","Data":"39734c6f8f55659d7c9cb021060fa9f4fe423fa05fcaf185cd1b4ebf0ecfb6af"}
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.688940 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-dfq4t"
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.689948 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q27pr" podStartSLOduration=2.689932554 podStartE2EDuration="2.689932554s" podCreationTimestamp="2026-02-16 22:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:29.686550333 +0000 UTC m=+1422.339829214" watchObservedRunningTime="2026-02-16 22:01:29.689932554 +0000 UTC m=+1422.343211445"
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.753501 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.7534792120000002 podStartE2EDuration="3.753479212s" podCreationTimestamp="2026-02-16 22:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:29.722149441 +0000 UTC m=+1422.375428332" watchObservedRunningTime="2026-02-16 22:01:29.753479212 +0000 UTC m=+1422.406758103"
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.755885 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9vq\" (UniqueName: \"kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.756081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.756139 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.756166 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.756213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.756230 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0\") pod \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\" (UID: \"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1\") "
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.771606 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq" (OuterVolumeSpecName: "kube-api-access-rc9vq") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "kube-api-access-rc9vq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.841356 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.843752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.851092 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.859036 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc9vq\" (UniqueName: \"kubernetes.io/projected/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-kube-api-access-rc9vq\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.859311 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.859405 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.859479 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.877773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.891115 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config" (OuterVolumeSpecName: "config") pod "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" (UID: "a2e364d5-ecbf-44f2-872c-89ce9a2a35d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.963094 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-config\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:29 crc kubenswrapper[4792]: I0216 22:01:29.963130 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:30 crc kubenswrapper[4792]: E0216 22:01:30.111883 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e364d5_ecbf_44f2_872c_89ce9a2a35d1.slice/crio-65f54c465784dfbfc959f95d255d8ba251f3741801b15c5e9ab5df7f6f3f45d8\": RecentStats: unable to find data in memory cache]"
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.702403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-dfq4t" event={"ID":"a2e364d5-ecbf-44f2-872c-89ce9a2a35d1","Type":"ContainerDied","Data":"65f54c465784dfbfc959f95d255d8ba251f3741801b15c5e9ab5df7f6f3f45d8"}
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.702777 4792 scope.go:117] "RemoveContainer" containerID="39734c6f8f55659d7c9cb021060fa9f4fe423fa05fcaf185cd1b4ebf0ecfb6af"
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.703108 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-dfq4t"
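The dnsmasq-dns teardown above starts with an API-side DELETE that the kubelet turns into "Killing container with a grace period ... gracePeriod=10". As a minimal client-go sketch of issuing that same delete with an explicit grace period (the kubeconfig path is hypothetical; in the cluster the delete was actually issued by the operator reconciling the Deployment, not by hand):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig; the path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Delete the pod with a 10s grace period, matching the
	// gracePeriod=10 the kubelet logs before SIGTERM-ing dnsmasq-dns.
	grace := int64(10)
	if err := cs.CoreV1().Pods("openstack").Delete(context.Background(),
		"dnsmasq-dns-7877d89589-dfq4t",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}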
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.743249 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"]
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.743684 4792 scope.go:117] "RemoveContainer" containerID="af64f879f76a980a9f21779c8ffdd63dcdcd715bd132c592285fea39843a1a0e"
Feb 16 22:01:30 crc kubenswrapper[4792]: I0216 22:01:30.763395 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-dfq4t"]
Feb 16 22:01:31 crc kubenswrapper[4792]: I0216 22:01:31.715035 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerStarted","Data":"9fa9789850138c8517dfcedfe1757bffb1b0cb1dc2da7372da448279b0c15f2a"}
Feb 16 22:01:31 crc kubenswrapper[4792]: I0216 22:01:31.715641 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 22:01:31 crc kubenswrapper[4792]: I0216 22:01:31.745578 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.795507857 podStartE2EDuration="6.745557162s" podCreationTimestamp="2026-02-16 22:01:25 +0000 UTC" firstStartedPulling="2026-02-16 22:01:26.859315371 +0000 UTC m=+1419.512594252" lastFinishedPulling="2026-02-16 22:01:30.809364666 +0000 UTC m=+1423.462643557" observedRunningTime="2026-02-16 22:01:31.733944886 +0000 UTC m=+1424.387223777" watchObservedRunningTime="2026-02-16 22:01:31.745557162 +0000 UTC m=+1424.398836063"
Feb 16 22:01:32 crc kubenswrapper[4792]: I0216 22:01:32.040367 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" path="/var/lib/kubelet/pods/a2e364d5-ecbf-44f2-872c-89ce9a2a35d1/volumes"
Feb 16 22:01:35 crc kubenswrapper[4792]: I0216 22:01:35.782280 4792 generic.go:334] "Generic (PLEG): container finished" podID="679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" containerID="50898a4098d7562b2ec8429f06197a8a64523f0b11cf5a58c14c86cf7254f9df" exitCode=0
Feb 16 22:01:35 crc kubenswrapper[4792]: I0216 22:01:35.782836 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27pr" event={"ID":"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8","Type":"ContainerDied","Data":"50898a4098d7562b2ec8429f06197a8a64523f0b11cf5a58c14c86cf7254f9df"}
Feb 16 22:01:36 crc kubenswrapper[4792]: I0216 22:01:36.797450 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 22:01:36 crc kubenswrapper[4792]: I0216 22:01:36.802862 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 22:01:36 crc kubenswrapper[4792]: I0216 22:01:36.807529 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.006909 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.006975 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.330321 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.394977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data\") pod \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") "
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.395069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfs8k\" (UniqueName: \"kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k\") pod \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") "
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.395179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle\") pod \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") "
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.395376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts\") pod \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\" (UID: \"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8\") "
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.400513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k" (OuterVolumeSpecName: "kube-api-access-xfs8k") pod "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" (UID: "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8"). InnerVolumeSpecName "kube-api-access-xfs8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.401689 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts" (OuterVolumeSpecName: "scripts") pod "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" (UID: "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.430633 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" (UID: "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.439940 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data" (OuterVolumeSpecName: "config-data") pod "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" (UID: "679ad2bc-eced-4c08-8c45-29b7e4f6c3f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.499419 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfs8k\" (UniqueName: \"kubernetes.io/projected/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-kube-api-access-xfs8k\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.499456 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.499467 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.499476 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.804885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27pr" event={"ID":"679ad2bc-eced-4c08-8c45-29b7e4f6c3f8","Type":"ContainerDied","Data":"cf2ccca53f993227aeadbcd60c28efcae8ef2d7a8bc2060b5c7567d9b3e18cab"}
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.804925 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2ccca53f993227aeadbcd60c28efcae8ef2d7a8bc2060b5c7567d9b3e18cab"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.804980 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27pr"
Feb 16 22:01:37 crc kubenswrapper[4792]: I0216 22:01:37.823525 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.029875 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.030233 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.064205 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.064471 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-log" containerID="cri-o://f13e3649579c967fdd513637498e965ec8a6b2456e69d4ddbfc74338524f2ab6" gracePeriod=30
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.064692 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-api" containerID="cri-o://8d8daef035f53feeb68a3b036bf0b174214b18f91caa3d18f1983d16d0b0e111" gracePeriod=30
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.095288 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.095755 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="05f62987-b755-4f2e-bbf9-8b8f09e81602" containerName="nova-scheduler-scheduler" containerID="cri-o://6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5" gracePeriod=30
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.170332 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.819051 4792 generic.go:334] "Generic (PLEG): container finished" podID="a453f426-c933-4a61-bb74-096f6171f7de" containerID="f13e3649579c967fdd513637498e965ec8a6b2456e69d4ddbfc74338524f2ab6" exitCode=143
Feb 16 22:01:38 crc kubenswrapper[4792]: I0216 22:01:38.819159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerDied","Data":"f13e3649579c967fdd513637498e965ec8a6b2456e69d4ddbfc74338524f2ab6"}
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.352166 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.449925 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle\") pod \"05f62987-b755-4f2e-bbf9-8b8f09e81602\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") "
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.450280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tqtm\" (UniqueName: \"kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm\") pod \"05f62987-b755-4f2e-bbf9-8b8f09e81602\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") "
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.450931 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data\") pod \"05f62987-b755-4f2e-bbf9-8b8f09e81602\" (UID: \"05f62987-b755-4f2e-bbf9-8b8f09e81602\") "
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.459895 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm" (OuterVolumeSpecName: "kube-api-access-5tqtm") pod "05f62987-b755-4f2e-bbf9-8b8f09e81602" (UID: "05f62987-b755-4f2e-bbf9-8b8f09e81602"). InnerVolumeSpecName "kube-api-access-5tqtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.493910 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05f62987-b755-4f2e-bbf9-8b8f09e81602" (UID: "05f62987-b755-4f2e-bbf9-8b8f09e81602"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
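Note the exit codes in the entries above: nova-api-log finishes with exitCode=143 after the grace-period SIGTERM, while containers that shut down cleanly before the signal escalates report exitCode=0. The 143 is the conventional "128 + signal number" encoding, which the following one-liner makes concrete:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Container runtimes report "128 + signal number" for a process
	// terminated by a signal. SIGTERM is 15, so a container killed by
	// the grace-period SIGTERM exits 143, exactly as logged for
	// nova-api-log above.
	fmt.Println(128 + int(syscall.SIGTERM)) // prints 143
}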
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.495456 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data" (OuterVolumeSpecName: "config-data") pod "05f62987-b755-4f2e-bbf9-8b8f09e81602" (UID: "05f62987-b755-4f2e-bbf9-8b8f09e81602"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.556224 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.556256 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f62987-b755-4f2e-bbf9-8b8f09e81602-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.556269 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tqtm\" (UniqueName: \"kubernetes.io/projected/05f62987-b755-4f2e-bbf9-8b8f09e81602-kube-api-access-5tqtm\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.831968 4792 generic.go:334] "Generic (PLEG): container finished" podID="05f62987-b755-4f2e-bbf9-8b8f09e81602" containerID="6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5" exitCode=0
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.832081 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05f62987-b755-4f2e-bbf9-8b8f09e81602","Type":"ContainerDied","Data":"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"}
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.832133 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05f62987-b755-4f2e-bbf9-8b8f09e81602","Type":"ContainerDied","Data":"97586444c5506033059d17be4a3dc8a5cf4bafb35a3c4277325121989ec2ba0f"}
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.832149 4792 scope.go:117] "RemoveContainer" containerID="6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.832272 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" containerID="cri-o://eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5" gracePeriod=30
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.832390 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" containerID="cri-o://495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052" gracePeriod=30
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.834670 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.861152 4792 scope.go:117] "RemoveContainer" containerID="6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"
Feb 16 22:01:39 crc kubenswrapper[4792]: E0216 22:01:39.861664 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5\": container with ID starting with 6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5 not found: ID does not exist" containerID="6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.861688 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5"} err="failed to get container status \"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5\": rpc error: code = NotFound desc = could not find container \"6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5\": container with ID starting with 6961f774581d5366276c3150cc321c8c131769f5fa4e97133671ddc8990a9fc5 not found: ID does not exist"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.880489 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.920615 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.939275 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:39 crc kubenswrapper[4792]: E0216 22:01:39.940687 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="init"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.940770 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="init"
Feb 16 22:01:39 crc kubenswrapper[4792]: E0216 22:01:39.940850 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" containerName="nova-manage"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.940906 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" containerName="nova-manage"
Feb 16 22:01:39 crc kubenswrapper[4792]: E0216 22:01:39.940968 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f62987-b755-4f2e-bbf9-8b8f09e81602" containerName="nova-scheduler-scheduler"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.941017 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f62987-b755-4f2e-bbf9-8b8f09e81602" containerName="nova-scheduler-scheduler"
Feb 16 22:01:39 crc kubenswrapper[4792]: E0216 22:01:39.941072 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="dnsmasq-dns"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.941119 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="dnsmasq-dns"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.941411 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" containerName="nova-manage"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.941485 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f62987-b755-4f2e-bbf9-8b8f09e81602" containerName="nova-scheduler-scheduler"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.941546 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e364d5-ecbf-44f2-872c-89ce9a2a35d1" containerName="dnsmasq-dns"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.942468 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.945384 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.965070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.965111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-config-data\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.965367 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl6c\" (UniqueName: \"kubernetes.io/projected/464ac62e-e668-417e-85ed-f8ddcee7ba19-kube-api-access-ztl6c\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:39 crc kubenswrapper[4792]: I0216 22:01:39.988880 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.045080 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f62987-b755-4f2e-bbf9-8b8f09e81602" path="/var/lib/kubelet/pods/05f62987-b755-4f2e-bbf9-8b8f09e81602/volumes"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.066882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztl6c\" (UniqueName: \"kubernetes.io/projected/464ac62e-e668-417e-85ed-f8ddcee7ba19-kube-api-access-ztl6c\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.066995 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.067017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-config-data\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.072777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-config-data\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.074812 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464ac62e-e668-417e-85ed-f8ddcee7ba19-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.085671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztl6c\" (UniqueName: \"kubernetes.io/projected/464ac62e-e668-417e-85ed-f8ddcee7ba19-kube-api-access-ztl6c\") pod \"nova-scheduler-0\" (UID: \"464ac62e-e668-417e-85ed-f8ddcee7ba19\") " pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.333149 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.844994 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerID="495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052" exitCode=143
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.845400 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerDied","Data":"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"}
Feb 16 22:01:40 crc kubenswrapper[4792]: I0216 22:01:40.891875 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 22:01:40 crc kubenswrapper[4792]: W0216 22:01:40.894943 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464ac62e_e668_417e_85ed_f8ddcee7ba19.slice/crio-cb7e5888feb54f27c1b476130757faf5d15da91587045d67e09575423b0a8928 WatchSource:0}: Error finding container cb7e5888feb54f27c1b476130757faf5d15da91587045d67e09575423b0a8928: Status 404 returned error can't find the container with id cb7e5888feb54f27c1b476130757faf5d15da91587045d67e09575423b0a8928
Feb 16 22:01:41 crc kubenswrapper[4792]: I0216 22:01:41.857506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"464ac62e-e668-417e-85ed-f8ddcee7ba19","Type":"ContainerStarted","Data":"1f894d4acb962cc8bcaa20dd9fde513d087f6c340d64dd0e61c0c1a4b2b831cd"}
Feb 16 22:01:41 crc kubenswrapper[4792]: I0216 22:01:41.857569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"464ac62e-e668-417e-85ed-f8ddcee7ba19","Type":"ContainerStarted","Data":"cb7e5888feb54f27c1b476130757faf5d15da91587045d67e09575423b0a8928"}
Feb 16 22:01:41 crc kubenswrapper[4792]: I0216 22:01:41.887506 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.887487992 podStartE2EDuration="2.887487992s" podCreationTimestamp="2026-02-16 22:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:41.878815353 +0000 UTC m=+1434.532094254" watchObservedRunningTime="2026-02-16 22:01:41.887487992 +0000 UTC m=+1434.540766883"
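Before the replacement nova-scheduler-0 starts, the "Caches populated for *v1.Secret" line shows the kubelet's reflector syncing the nova-scheduler-config-data secret it needs for the mounts that follow. A rough sketch of the same client-go reflector/informer machinery viewed from the outside (the kubeconfig path is a placeholder, and the kubelet actually scopes its watches to the individual secrets each pod references rather than a whole namespace as done here):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch Secrets in the "openstack" namespace; each object delivered
	// to AddFunc corresponds to a reflector cache being populated.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("openstack"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("cache populated:", obj.(*corev1.Secret).Name)
		},
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, inf.HasSynced)
}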
Feb 16 22:01:42 crc kubenswrapper[4792]: I0216 22:01:42.975163 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:49980->10.217.0.254:8775: read: connection reset by peer"
Feb 16 22:01:42 crc kubenswrapper[4792]: I0216 22:01:42.975172 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:49990->10.217.0.254:8775: read: connection reset by peer"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.804792 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.881751 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerID="eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5" exitCode=0
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.881856 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.881845 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerDied","Data":"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"}
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.881904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7","Type":"ContainerDied","Data":"4464280adcbc7837b5176e7ff4e010ad3069b2d9b2484f6d55fe7c13cfc320ac"}
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.881927 4792 scope.go:117] "RemoveContainer" containerID="eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.884087 4792 generic.go:334] "Generic (PLEG): container finished" podID="a453f426-c933-4a61-bb74-096f6171f7de" containerID="8d8daef035f53feeb68a3b036bf0b174214b18f91caa3d18f1983d16d0b0e111" exitCode=0
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.884116 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerDied","Data":"8d8daef035f53feeb68a3b036bf0b174214b18f91caa3d18f1983d16d0b0e111"}
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.915897 4792 scope.go:117] "RemoveContainer" containerID="495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.980093 4792 scope.go:117] "RemoveContainer" containerID="eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.981369 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swfz4\" (UniqueName: \"kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4\") pod \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") "
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.981420 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs\") pod \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") "
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.981463 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs\") pod \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") "
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.981707 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data\") pod \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") "
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.981804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle\") pod \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\" (UID: \"bf70292f-dd26-4ae5-b66f-6f2cc7473ef7\") "
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.982044 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs" (OuterVolumeSpecName: "logs") pod "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" (UID: "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:01:43 crc kubenswrapper[4792]: E0216 22:01:43.984273 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5\": container with ID starting with eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5 not found: ID does not exist" containerID="eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.984371 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5"} err="failed to get container status \"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5\": rpc error: code = NotFound desc = could not find container \"eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5\": container with ID starting with eadc59c45ef8d979967bf7f3d69d2b078f60a4d77d08f87de652140ea029efd5 not found: ID does not exist"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.984399 4792 scope.go:117] "RemoveContainer" containerID="495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"
Feb 16 22:01:43 crc kubenswrapper[4792]: E0216 22:01:43.985347 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052\": container with ID starting with 495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052 not found: ID does not exist" containerID="495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.985395 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052"} err="failed to get container status \"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052\": rpc error: code = NotFound desc = could not find container \"495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052\": container with ID starting with 495fab2fb798644148c6ba07d8e00fa10873f2eec67ae69449ef370bc3448052 not found: ID does not exist"
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.986445 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-logs\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:43 crc kubenswrapper[4792]: I0216 22:01:43.996127 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4" (OuterVolumeSpecName: "kube-api-access-swfz4") pod "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" (UID: "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7"). InnerVolumeSpecName "kube-api-access-swfz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.058126 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" (UID: "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.060278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data" (OuterVolumeSpecName: "config-data") pod "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" (UID: "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.066926 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" (UID: "bf70292f-dd26-4ae5-b66f-6f2cc7473ef7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.090969 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.091033 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swfz4\" (UniqueName: \"kubernetes.io/projected/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-kube-api-access-swfz4\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.091056 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.091083 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.170232 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.194069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") "
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.194125 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") "
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.194179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdb6c\" (UniqueName: \"kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") "
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.194928 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs" (OuterVolumeSpecName: "logs") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.195154 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453f426-c933-4a61-bb74-096f6171f7de-logs\") on node \"crc\" DevicePath \"\""
Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.199532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c" (OuterVolumeSpecName: "kube-api-access-wdb6c") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "kube-api-access-wdb6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.247719 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data" (OuterVolumeSpecName: "config-data") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.296418 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.296528 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.296635 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle\") pod \"a453f426-c933-4a61-bb74-096f6171f7de\" (UID: \"a453f426-c933-4a61-bb74-096f6171f7de\") " Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.297357 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.297385 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdb6c\" (UniqueName: \"kubernetes.io/projected/a453f426-c933-4a61-bb74-096f6171f7de-kube-api-access-wdb6c\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.334887 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.356333 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.367682 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: E0216 22:01:44.368305 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368330 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" Feb 16 22:01:44 crc kubenswrapper[4792]: E0216 22:01:44.368345 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368354 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" Feb 16 22:01:44 crc kubenswrapper[4792]: E0216 22:01:44.368396 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a453f426-c933-4a61-bb74-096f6171f7de" 
containerName="nova-api-api" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368405 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-api" Feb 16 22:01:44 crc kubenswrapper[4792]: E0216 22:01:44.368449 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-log" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368459 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-log" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368795 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-log" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368839 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-api" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368855 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a453f426-c933-4a61-bb74-096f6171f7de" containerName="nova-api-log" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.368867 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" containerName="nova-metadata-metadata" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.369784 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.370505 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.372963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.373395 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.383447 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.401909 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.428659 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.446813 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a453f426-c933-4a61-bb74-096f6171f7de" (UID: "a453f426-c933-4a61-bb74-096f6171f7de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.503678 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bc8f806-8d65-4035-9830-e7bf69083c19-logs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.503821 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-config-data\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.503851 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.503874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjwr\" (UniqueName: \"kubernetes.io/projected/6bc8f806-8d65-4035-9830-e7bf69083c19-kube-api-access-2rjwr\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.503927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.504132 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.504245 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453f426-c933-4a61-bb74-096f6171f7de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.607121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bc8f806-8d65-4035-9830-e7bf69083c19-logs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.607243 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-config-data\") pod 
\"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.607271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.607289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjwr\" (UniqueName: \"kubernetes.io/projected/6bc8f806-8d65-4035-9830-e7bf69083c19-kube-api-access-2rjwr\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.607717 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bc8f806-8d65-4035-9830-e7bf69083c19-logs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.608080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.610939 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-config-data\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.612748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.615125 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc8f806-8d65-4035-9830-e7bf69083c19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.622800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjwr\" (UniqueName: \"kubernetes.io/projected/6bc8f806-8d65-4035-9830-e7bf69083c19-kube-api-access-2rjwr\") pod \"nova-metadata-0\" (UID: \"6bc8f806-8d65-4035-9830-e7bf69083c19\") " pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.780151 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.909639 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a453f426-c933-4a61-bb74-096f6171f7de","Type":"ContainerDied","Data":"a6b2bbe7bc268aaae7c1dc770e490360e59207b1ff77b28ad086acefe4ff2409"} Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.909695 4792 scope.go:117] "RemoveContainer" containerID="8d8daef035f53feeb68a3b036bf0b174214b18f91caa3d18f1983d16d0b0e111" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.909818 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.956376 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.972157 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.978928 4792 scope.go:117] "RemoveContainer" containerID="f13e3649579c967fdd513637498e965ec8a6b2456e69d4ddbfc74338524f2ab6" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.983399 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.987638 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.989567 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.990535 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 22:01:44 crc kubenswrapper[4792]: I0216 22:01:44.992341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.008040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.025811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-logs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.025963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.026026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.026297 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.026517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhv6\" (UniqueName: \"kubernetes.io/projected/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-kube-api-access-2dhv6\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.027001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-config-data\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130189 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhv6\" (UniqueName: \"kubernetes.io/projected/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-kube-api-access-2dhv6\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130399 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-config-data\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-logs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.130958 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-logs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.139281 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.139290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.139425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-config-data\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.139664 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.147539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhv6\" (UniqueName: \"kubernetes.io/projected/3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6-kube-api-access-2dhv6\") pod \"nova-api-0\" (UID: \"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6\") " pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.319988 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.334121 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.345349 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 22:01:45 crc kubenswrapper[4792]: W0216 22:01:45.348784 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bc8f806_8d65_4035_9830_e7bf69083c19.slice/crio-97a251eb8127fb1d5cceea27515250026f88d908b1c55261c53083a74fde591e WatchSource:0}: Error finding container 97a251eb8127fb1d5cceea27515250026f88d908b1c55261c53083a74fde591e: Status 404 returned error can't find the container with id 97a251eb8127fb1d5cceea27515250026f88d908b1c55261c53083a74fde591e Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.816479 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.924510 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6","Type":"ContainerStarted","Data":"3a75f52f88a95cfd3b4175187f1c09cda941783f6db9a0b5e57ef2de8abfde41"} Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.926826 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bc8f806-8d65-4035-9830-e7bf69083c19","Type":"ContainerStarted","Data":"d669e875432785f92c41496f78f09bb54f9116c9c0025b673883020866c47ec9"} Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.926869 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bc8f806-8d65-4035-9830-e7bf69083c19","Type":"ContainerStarted","Data":"f647cc2b56bc1b083a0146c1faadb94ec07861716b742d038c441e1be1980aa3"} Feb 16 22:01:45 
crc kubenswrapper[4792]: I0216 22:01:45.926879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bc8f806-8d65-4035-9830-e7bf69083c19","Type":"ContainerStarted","Data":"97a251eb8127fb1d5cceea27515250026f88d908b1c55261c53083a74fde591e"} Feb 16 22:01:45 crc kubenswrapper[4792]: I0216 22:01:45.948491 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.948472056 podStartE2EDuration="1.948472056s" podCreationTimestamp="2026-02-16 22:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:45.943336365 +0000 UTC m=+1438.596615256" watchObservedRunningTime="2026-02-16 22:01:45.948472056 +0000 UTC m=+1438.601750947" Feb 16 22:01:46 crc kubenswrapper[4792]: I0216 22:01:46.044204 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a453f426-c933-4a61-bb74-096f6171f7de" path="/var/lib/kubelet/pods/a453f426-c933-4a61-bb74-096f6171f7de/volumes" Feb 16 22:01:46 crc kubenswrapper[4792]: I0216 22:01:46.045784 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf70292f-dd26-4ae5-b66f-6f2cc7473ef7" path="/var/lib/kubelet/pods/bf70292f-dd26-4ae5-b66f-6f2cc7473ef7/volumes" Feb 16 22:01:46 crc kubenswrapper[4792]: I0216 22:01:46.943804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6","Type":"ContainerStarted","Data":"76bec97c74d3e245df0eb5dc080ae330af5506ead66a605e2ceead94aff94365"} Feb 16 22:01:46 crc kubenswrapper[4792]: I0216 22:01:46.944032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6","Type":"ContainerStarted","Data":"ed26571f6ab07ffa98f0ffbe106f23c65407ef58524fbc23d6a596b52ffee934"} Feb 16 22:01:46 crc kubenswrapper[4792]: I0216 22:01:46.966277 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.966263126 podStartE2EDuration="2.966263126s" podCreationTimestamp="2026-02-16 22:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:46.965167547 +0000 UTC m=+1439.618446438" watchObservedRunningTime="2026-02-16 22:01:46.966263126 +0000 UTC m=+1439.619542007" Feb 16 22:01:49 crc kubenswrapper[4792]: I0216 22:01:49.781019 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:01:49 crc kubenswrapper[4792]: I0216 22:01:49.781583 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 22:01:50 crc kubenswrapper[4792]: I0216 22:01:50.333418 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 22:01:50 crc kubenswrapper[4792]: I0216 22:01:50.369133 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 22:01:51 crc kubenswrapper[4792]: I0216 22:01:51.018697 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 22:01:54 crc kubenswrapper[4792]: I0216 22:01:54.781057 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 22:01:54 crc kubenswrapper[4792]: I0216 
22:01:54.781504 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 22:01:55 crc kubenswrapper[4792]: I0216 22:01:55.330862 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:01:55 crc kubenswrapper[4792]: I0216 22:01:55.331277 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 22:01:55 crc kubenswrapper[4792]: I0216 22:01:55.797873 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6bc8f806-8d65-4035-9830-e7bf69083c19" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:55 crc kubenswrapper[4792]: I0216 22:01:55.798259 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6bc8f806-8d65-4035-9830-e7bf69083c19" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:56 crc kubenswrapper[4792]: I0216 22:01:56.322083 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 22:01:56 crc kubenswrapper[4792]: I0216 22:01:56.358800 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:01:56 crc kubenswrapper[4792]: I0216 22:01:56.358784 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.173324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.173975 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="97394c7a-06f3-433b-84dd-7ae885a8753d" containerName="kube-state-metrics" containerID="cri-o://bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd" gracePeriod=30 Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.322855 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.323094 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" containerName="mysqld-exporter" containerID="cri-o://ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3" gracePeriod=30 Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.902600 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.908449 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.993467 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk7zp\" (UniqueName: \"kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp\") pod \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.993551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2xx\" (UniqueName: \"kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx\") pod \"97394c7a-06f3-433b-84dd-7ae885a8753d\" (UID: \"97394c7a-06f3-433b-84dd-7ae885a8753d\") " Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.993634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle\") pod \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " Feb 16 22:02:01 crc kubenswrapper[4792]: I0216 22:02:01.993730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data\") pod \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\" (UID: \"f95cab6a-8fca-4a8e-b9eb-3d1751864411\") " Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.001089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp" (OuterVolumeSpecName: "kube-api-access-fk7zp") pod "f95cab6a-8fca-4a8e-b9eb-3d1751864411" (UID: "f95cab6a-8fca-4a8e-b9eb-3d1751864411"). InnerVolumeSpecName "kube-api-access-fk7zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.002184 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx" (OuterVolumeSpecName: "kube-api-access-gq2xx") pod "97394c7a-06f3-433b-84dd-7ae885a8753d" (UID: "97394c7a-06f3-433b-84dd-7ae885a8753d"). InnerVolumeSpecName "kube-api-access-gq2xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.036616 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f95cab6a-8fca-4a8e-b9eb-3d1751864411" (UID: "f95cab6a-8fca-4a8e-b9eb-3d1751864411"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.064236 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data" (OuterVolumeSpecName: "config-data") pod "f95cab6a-8fca-4a8e-b9eb-3d1751864411" (UID: "f95cab6a-8fca-4a8e-b9eb-3d1751864411"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.097773 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk7zp\" (UniqueName: \"kubernetes.io/projected/f95cab6a-8fca-4a8e-b9eb-3d1751864411-kube-api-access-fk7zp\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.098144 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq2xx\" (UniqueName: \"kubernetes.io/projected/97394c7a-06f3-433b-84dd-7ae885a8753d-kube-api-access-gq2xx\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.098187 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.098207 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95cab6a-8fca-4a8e-b9eb-3d1751864411-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.119897 4792 generic.go:334] "Generic (PLEG): container finished" podID="97394c7a-06f3-433b-84dd-7ae885a8753d" containerID="bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd" exitCode=2 Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.119932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97394c7a-06f3-433b-84dd-7ae885a8753d","Type":"ContainerDied","Data":"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd"} Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.119973 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97394c7a-06f3-433b-84dd-7ae885a8753d","Type":"ContainerDied","Data":"6fd1a82d300eb2c649ce811c0e2caa1c9191a5c0cb945addf66c8981ce0a7b5f"} Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.119999 4792 scope.go:117] "RemoveContainer" containerID="bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.119945 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.122093 4792 generic.go:334] "Generic (PLEG): container finished" podID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" containerID="ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3" exitCode=2 Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.122123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f95cab6a-8fca-4a8e-b9eb-3d1751864411","Type":"ContainerDied","Data":"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3"} Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.122144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f95cab6a-8fca-4a8e-b9eb-3d1751864411","Type":"ContainerDied","Data":"b88dc8b32e2aa269e57905d68fdddf8eee80ab56825f99ed2c2dc86d59a19efd"} Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.122185 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.173395 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.191162 4792 scope.go:117] "RemoveContainer" containerID="bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd" Feb 16 22:02:02 crc kubenswrapper[4792]: E0216 22:02:02.192138 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd\": container with ID starting with bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd not found: ID does not exist" containerID="bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.192215 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd"} err="failed to get container status \"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd\": rpc error: code = NotFound desc = could not find container \"bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd\": container with ID starting with bc70986cf5797b01a3f42d4fcc24e7d9146da75733042b86425029112f57d4cd not found: ID does not exist" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.192280 4792 scope.go:117] "RemoveContainer" containerID="ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.200385 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.212349 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.224193 4792 scope.go:117] "RemoveContainer" containerID="ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.224286 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: E0216 22:02:02.224909 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3\": container with ID starting with ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3 not found: ID does not exist" containerID="ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.224933 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3"} err="failed to get container status \"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3\": rpc error: code = NotFound desc = could not find container \"ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3\": container with ID starting with ddd0a0a2f07d15f006b09e33b84995558eeb2e35395d7b29a5086d4e86f7bdd3 not found: ID does not exist" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.238853 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: E0216 22:02:02.239550 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" containerName="mysqld-exporter" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.239568 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" containerName="mysqld-exporter" Feb 16 22:02:02 crc kubenswrapper[4792]: E0216 22:02:02.239585 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97394c7a-06f3-433b-84dd-7ae885a8753d" containerName="kube-state-metrics" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.239594 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97394c7a-06f3-433b-84dd-7ae885a8753d" containerName="kube-state-metrics" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.239926 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97394c7a-06f3-433b-84dd-7ae885a8753d" containerName="kube-state-metrics" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.239967 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" containerName="mysqld-exporter" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.240988 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.242905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.243132 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.249110 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.260273 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.262167 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.265849 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.266200 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.296384 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.305940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.305997 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlcb\" (UniqueName: \"kubernetes.io/projected/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-api-access-tjlcb\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj65z\" (UniqueName: \"kubernetes.io/projected/b3131b03-f776-460c-9bd4-61398b8ba27a-kube-api-access-kj65z\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-config-data\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.306246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.325506 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.328500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.336280 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-config-data\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408805 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgwd\" (UniqueName: \"kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408894 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlcb\" (UniqueName: \"kubernetes.io/projected/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-api-access-tjlcb\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408950 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.408987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.409028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj65z\" (UniqueName: \"kubernetes.io/projected/b3131b03-f776-460c-9bd4-61398b8ba27a-kube-api-access-kj65z\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.409044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.414762 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.414874 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.415452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.415803 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.429140 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3131b03-f776-460c-9bd4-61398b8ba27a-config-data\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.429468 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd434b09-606a-45c0-8b54-2fbf907587f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.432367 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj65z\" (UniqueName: \"kubernetes.io/projected/b3131b03-f776-460c-9bd4-61398b8ba27a-kube-api-access-kj65z\") pod \"mysqld-exporter-0\" (UID: \"b3131b03-f776-460c-9bd4-61398b8ba27a\") " pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.437897 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlcb\" (UniqueName: \"kubernetes.io/projected/dd434b09-606a-45c0-8b54-2fbf907587f7-kube-api-access-tjlcb\") pod \"kube-state-metrics-0\" (UID: \"dd434b09-606a-45c0-8b54-2fbf907587f7\") " pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.510688 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.510872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.511475 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.511696 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.511953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgwd\" (UniqueName: \"kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.532359 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgwd\" (UniqueName: \"kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd\") pod \"redhat-operators-rdz7d\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.659253 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.668953 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 22:02:02 crc kubenswrapper[4792]: I0216 22:02:02.683101 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:03 crc kubenswrapper[4792]: I0216 22:02:03.535193 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 22:02:03 crc kubenswrapper[4792]: I0216 22:02:03.676508 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 22:02:03 crc kubenswrapper[4792]: I0216 22:02:03.729486 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:03 crc kubenswrapper[4792]: W0216 22:02:03.761888 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83834f34_f8af_43c2_8ae0_1e48248d88e9.slice/crio-7405535a84f60b0de4d9f0435768665843cb0b507a1cec2358c1c14a2eca9158 WatchSource:0}: Error finding container 7405535a84f60b0de4d9f0435768665843cb0b507a1cec2358c1c14a2eca9158: Status 404 returned error can't find the container with id 7405535a84f60b0de4d9f0435768665843cb0b507a1cec2358c1c14a2eca9158 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.044658 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97394c7a-06f3-433b-84dd-7ae885a8753d" path="/var/lib/kubelet/pods/97394c7a-06f3-433b-84dd-7ae885a8753d/volumes" Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.046027 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95cab6a-8fca-4a8e-b9eb-3d1751864411" path="/var/lib/kubelet/pods/f95cab6a-8fca-4a8e-b9eb-3d1751864411/volumes" Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.189193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b3131b03-f776-460c-9bd4-61398b8ba27a","Type":"ContainerStarted","Data":"6d4eb96d44924141f14e105c0b290236884f13e8f449115ffcdf630e1855c215"} Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.194259 4792 generic.go:334] "Generic (PLEG): container finished" podID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerID="ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d" exitCode=0 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.194780 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerDied","Data":"ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d"} Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.194843 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerStarted","Data":"7405535a84f60b0de4d9f0435768665843cb0b507a1cec2358c1c14a2eca9158"} Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.199225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd434b09-606a-45c0-8b54-2fbf907587f7","Type":"ContainerStarted","Data":"6f0d68e9fd8cf87e9e069bbae488c6a8fc983ef683d398e4426c519b40c0d042"} Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.314454 4792 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.314849 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-central-agent" containerID="cri-o://d8bc4c1309a043b098301ff393835f93cb3f5a778eda3b7f7931f203a6376090" gracePeriod=30 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.316489 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="proxy-httpd" containerID="cri-o://9fa9789850138c8517dfcedfe1757bffb1b0cb1dc2da7372da448279b0c15f2a" gracePeriod=30 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.316616 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="sg-core" containerID="cri-o://f3bdbe05344c97a2f3174ba5363690898c8acf77b7e31a54a2bf96b1b80ba86d" gracePeriod=30 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.316672 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-notification-agent" containerID="cri-o://04a28427d4bfaafbc3e454c69d8cc18eb9e0841c5ee0c454fb0a4103808bfcf0" gracePeriod=30 Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.812157 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.812660 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 22:02:04 crc kubenswrapper[4792]: I0216 22:02:04.821317 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.210586 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerStarted","Data":"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.212482 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dd434b09-606a-45c0-8b54-2fbf907587f7","Type":"ContainerStarted","Data":"d0ae55b9869f49fa280d357b88680f3e0139bc5e447fed5bda3416fac69fbf27"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.212649 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.214321 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b3131b03-f776-460c-9bd4-61398b8ba27a","Type":"ContainerStarted","Data":"e62840ae7480d947a5018cdeb0601cec69a279174fa3a6637c891c752cf8397e"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218096 4792 generic.go:334] "Generic (PLEG): container finished" podID="011957d1-61c8-444f-a365-4382969bbd58" containerID="9fa9789850138c8517dfcedfe1757bffb1b0cb1dc2da7372da448279b0c15f2a" exitCode=0 Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218131 4792 generic.go:334] "Generic (PLEG): container finished" podID="011957d1-61c8-444f-a365-4382969bbd58" containerID="f3bdbe05344c97a2f3174ba5363690898c8acf77b7e31a54a2bf96b1b80ba86d" 
exitCode=2 Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218144 4792 generic.go:334] "Generic (PLEG): container finished" podID="011957d1-61c8-444f-a365-4382969bbd58" containerID="d8bc4c1309a043b098301ff393835f93cb3f5a778eda3b7f7931f203a6376090" exitCode=0 Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerDied","Data":"9fa9789850138c8517dfcedfe1757bffb1b0cb1dc2da7372da448279b0c15f2a"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218565 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerDied","Data":"f3bdbe05344c97a2f3174ba5363690898c8acf77b7e31a54a2bf96b1b80ba86d"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.218577 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerDied","Data":"d8bc4c1309a043b098301ff393835f93cb3f5a778eda3b7f7931f203a6376090"} Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.238295 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.258668 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.759424709 podStartE2EDuration="3.258649148s" podCreationTimestamp="2026-02-16 22:02:02 +0000 UTC" firstStartedPulling="2026-02-16 22:02:03.664806504 +0000 UTC m=+1456.318085395" lastFinishedPulling="2026-02-16 22:02:04.164030943 +0000 UTC m=+1456.817309834" observedRunningTime="2026-02-16 22:02:05.255532363 +0000 UTC m=+1457.908811274" watchObservedRunningTime="2026-02-16 22:02:05.258649148 +0000 UTC m=+1457.911928039" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.315711 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.864942902 podStartE2EDuration="3.315694318s" podCreationTimestamp="2026-02-16 22:02:02 +0000 UTC" firstStartedPulling="2026-02-16 22:02:03.53713885 +0000 UTC m=+1456.190417741" lastFinishedPulling="2026-02-16 22:02:03.987890266 +0000 UTC m=+1456.641169157" observedRunningTime="2026-02-16 22:02:05.294145715 +0000 UTC m=+1457.947424606" watchObservedRunningTime="2026-02-16 22:02:05.315694318 +0000 UTC m=+1457.968973209" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.373731 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.376222 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.431987 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 22:02:05 crc kubenswrapper[4792]: I0216 22:02:05.444662 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 22:02:06 crc kubenswrapper[4792]: I0216 22:02:06.229203 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 22:02:06 crc kubenswrapper[4792]: I0216 22:02:06.234977 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 22:02:07 crc 
kubenswrapper[4792]: I0216 22:02:07.243560 4792 generic.go:334] "Generic (PLEG): container finished" podID="011957d1-61c8-444f-a365-4382969bbd58" containerID="04a28427d4bfaafbc3e454c69d8cc18eb9e0841c5ee0c454fb0a4103808bfcf0" exitCode=0 Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.243633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerDied","Data":"04a28427d4bfaafbc3e454c69d8cc18eb9e0841c5ee0c454fb0a4103808bfcf0"} Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.792334 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859217 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbqrd\" (UniqueName: \"kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859536 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859631 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859675 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.859738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts\") pod \"011957d1-61c8-444f-a365-4382969bbd58\" (UID: \"011957d1-61c8-444f-a365-4382969bbd58\") " Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.860089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.860100 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.864295 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.864330 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011957d1-61c8-444f-a365-4382969bbd58-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.893563 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts" (OuterVolumeSpecName: "scripts") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.896376 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd" (OuterVolumeSpecName: "kube-api-access-zbqrd") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "kube-api-access-zbqrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.899582 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.967189 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.967222 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbqrd\" (UniqueName: \"kubernetes.io/projected/011957d1-61c8-444f-a365-4382969bbd58-kube-api-access-zbqrd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:07 crc kubenswrapper[4792]: I0216 22:02:07.967234 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.018738 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data" (OuterVolumeSpecName: "config-data") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.020724 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "011957d1-61c8-444f-a365-4382969bbd58" (UID: "011957d1-61c8-444f-a365-4382969bbd58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.069704 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.069746 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011957d1-61c8-444f-a365-4382969bbd58-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.257059 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.257064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011957d1-61c8-444f-a365-4382969bbd58","Type":"ContainerDied","Data":"e38dc71c27d57d215db778cf9cb7dfcc8bb7585d1d679c2b459a12b6590e1c98"} Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.258091 4792 scope.go:117] "RemoveContainer" containerID="9fa9789850138c8517dfcedfe1757bffb1b0cb1dc2da7372da448279b0c15f2a" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.294167 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.305067 4792 scope.go:117] "RemoveContainer" containerID="f3bdbe05344c97a2f3174ba5363690898c8acf77b7e31a54a2bf96b1b80ba86d" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.315569 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.328659 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:08 crc kubenswrapper[4792]: E0216 22:02:08.329498 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-notification-agent" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.329579 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-notification-agent" Feb 16 22:02:08 crc kubenswrapper[4792]: E0216 22:02:08.329658 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="proxy-httpd" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.329712 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="proxy-httpd" Feb 16 22:02:08 crc kubenswrapper[4792]: E0216 22:02:08.329794 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-central-agent" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.329848 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-central-agent" Feb 16 22:02:08 crc 
kubenswrapper[4792]: E0216 22:02:08.329911 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="sg-core" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.329966 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="sg-core" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.330233 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-notification-agent" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.330315 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="proxy-httpd" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.330382 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="ceilometer-central-agent" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.330435 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="011957d1-61c8-444f-a365-4382969bbd58" containerName="sg-core" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.332362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.341227 4792 scope.go:117] "RemoveContainer" containerID="04a28427d4bfaafbc3e454c69d8cc18eb9e0841c5ee0c454fb0a4103808bfcf0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.341629 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.341878 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.342095 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.363809 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.376790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.376857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.376886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.376923 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.376994 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.377016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24sd\" (UniqueName: \"kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.377097 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.377147 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.381712 4792 scope.go:117] "RemoveContainer" containerID="d8bc4c1309a043b098301ff393835f93cb3f5a778eda3b7f7931f203a6376090" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479276 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479400 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data\") pod \"ceilometer-0\" (UID: 
\"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24sd\" (UniqueName: \"kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.479559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.481561 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.481589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.485916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.486020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.487188 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.488323 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.489523 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " 
pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.505303 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24sd\" (UniqueName: \"kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd\") pod \"ceilometer-0\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " pod="openstack/ceilometer-0" Feb 16 22:02:08 crc kubenswrapper[4792]: I0216 22:02:08.666672 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:09 crc kubenswrapper[4792]: I0216 22:02:09.322300 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:10 crc kubenswrapper[4792]: I0216 22:02:10.045753 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011957d1-61c8-444f-a365-4382969bbd58" path="/var/lib/kubelet/pods/011957d1-61c8-444f-a365-4382969bbd58/volumes" Feb 16 22:02:10 crc kubenswrapper[4792]: I0216 22:02:10.277706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerStarted","Data":"9705c41c707da8a50b5b6773148a1b7f4f02cd3a0f61ab49d9f9e00aa04e05fb"} Feb 16 22:02:10 crc kubenswrapper[4792]: I0216 22:02:10.277749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerStarted","Data":"1b6082b02019ded2fcaa191936516d8a6a8ea845c220e67f76931e68a85c578d"} Feb 16 22:02:11 crc kubenswrapper[4792]: I0216 22:02:11.290961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerStarted","Data":"06500abde444926febab2ec9ec4556109c1db4ad34da17b7a4a786828171f0c9"} Feb 16 22:02:11 crc kubenswrapper[4792]: I0216 22:02:11.295550 4792 generic.go:334] "Generic (PLEG): container finished" podID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerID="3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33" exitCode=0 Feb 16 22:02:11 crc kubenswrapper[4792]: I0216 22:02:11.295619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerDied","Data":"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33"} Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.308946 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerStarted","Data":"d6182231a63fd22444e5df4359d77549e0965a8589ea71c9138bcd1551e912b0"} Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.311004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerStarted","Data":"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551"} Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.341616 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rdz7d" podStartSLOduration=2.832076769 podStartE2EDuration="10.341581941s" podCreationTimestamp="2026-02-16 22:02:02 +0000 UTC" firstStartedPulling="2026-02-16 22:02:04.197470133 +0000 UTC m=+1456.850749024" lastFinishedPulling="2026-02-16 22:02:11.706975305 +0000 UTC m=+1464.360254196" observedRunningTime="2026-02-16 
22:02:12.332666405 +0000 UTC m=+1464.985945336" watchObservedRunningTime="2026-02-16 22:02:12.341581941 +0000 UTC m=+1464.994860832" Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.675825 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.683958 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:12 crc kubenswrapper[4792]: I0216 22:02:12.683989 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:13 crc kubenswrapper[4792]: I0216 22:02:13.323509 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerStarted","Data":"8d123a688b2a96c4c26063f8e19518d0a3687a7608b9e2a52d3f75d2f438c4ea"} Feb 16 22:02:13 crc kubenswrapper[4792]: I0216 22:02:13.324104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 22:02:13 crc kubenswrapper[4792]: I0216 22:02:13.347215 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.012511601 podStartE2EDuration="5.347193216s" podCreationTimestamp="2026-02-16 22:02:08 +0000 UTC" firstStartedPulling="2026-02-16 22:02:09.330339757 +0000 UTC m=+1461.983618648" lastFinishedPulling="2026-02-16 22:02:12.665021372 +0000 UTC m=+1465.318300263" observedRunningTime="2026-02-16 22:02:13.340204284 +0000 UTC m=+1465.993483175" watchObservedRunningTime="2026-02-16 22:02:13.347193216 +0000 UTC m=+1466.000472107" Feb 16 22:02:13 crc kubenswrapper[4792]: I0216 22:02:13.755852 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rdz7d" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" probeResult="failure" output=< Feb 16 22:02:13 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:02:13 crc kubenswrapper[4792]: > Feb 16 22:02:23 crc kubenswrapper[4792]: I0216 22:02:23.731000 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rdz7d" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" probeResult="failure" output=< Feb 16 22:02:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:02:23 crc kubenswrapper[4792]: > Feb 16 22:02:31 crc kubenswrapper[4792]: I0216 22:02:31.532386 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:02:31 crc kubenswrapper[4792]: I0216 22:02:31.532771 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:02:32 crc kubenswrapper[4792]: I0216 22:02:32.741536 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:32 crc 
kubenswrapper[4792]: I0216 22:02:32.794228 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:33 crc kubenswrapper[4792]: I0216 22:02:33.477354 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:34 crc kubenswrapper[4792]: I0216 22:02:34.647588 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rdz7d" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" containerID="cri-o://a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551" gracePeriod=2 Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.156032 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.253861 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content\") pod \"83834f34-f8af-43c2-8ae0-1e48248d88e9\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.254042 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmgwd\" (UniqueName: \"kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd\") pod \"83834f34-f8af-43c2-8ae0-1e48248d88e9\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.254127 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities\") pod \"83834f34-f8af-43c2-8ae0-1e48248d88e9\" (UID: \"83834f34-f8af-43c2-8ae0-1e48248d88e9\") " Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.254774 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities" (OuterVolumeSpecName: "utilities") pod "83834f34-f8af-43c2-8ae0-1e48248d88e9" (UID: "83834f34-f8af-43c2-8ae0-1e48248d88e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.255204 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.259370 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd" (OuterVolumeSpecName: "kube-api-access-jmgwd") pod "83834f34-f8af-43c2-8ae0-1e48248d88e9" (UID: "83834f34-f8af-43c2-8ae0-1e48248d88e9"). InnerVolumeSpecName "kube-api-access-jmgwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.357590 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmgwd\" (UniqueName: \"kubernetes.io/projected/83834f34-f8af-43c2-8ae0-1e48248d88e9-kube-api-access-jmgwd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.398102 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83834f34-f8af-43c2-8ae0-1e48248d88e9" (UID: "83834f34-f8af-43c2-8ae0-1e48248d88e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.460148 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83834f34-f8af-43c2-8ae0-1e48248d88e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.662928 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdz7d" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.663868 4792 generic.go:334] "Generic (PLEG): container finished" podID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerID="a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551" exitCode=0 Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.663918 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerDied","Data":"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551"} Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.663983 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdz7d" event={"ID":"83834f34-f8af-43c2-8ae0-1e48248d88e9","Type":"ContainerDied","Data":"7405535a84f60b0de4d9f0435768665843cb0b507a1cec2358c1c14a2eca9158"} Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.664016 4792 scope.go:117] "RemoveContainer" containerID="a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.695489 4792 scope.go:117] "RemoveContainer" containerID="3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.706180 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.728767 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rdz7d"] Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.729203 4792 scope.go:117] "RemoveContainer" containerID="ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.787259 4792 scope.go:117] "RemoveContainer" containerID="a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551" Feb 16 22:02:35 crc kubenswrapper[4792]: E0216 22:02:35.788219 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551\": container with ID starting with a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551 
not found: ID does not exist" containerID="a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.788271 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551"} err="failed to get container status \"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551\": rpc error: code = NotFound desc = could not find container \"a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551\": container with ID starting with a08156b4415b9e9b73616f9539ab373326a3129f252e97d5091057abe7fa5551 not found: ID does not exist" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.788303 4792 scope.go:117] "RemoveContainer" containerID="3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33" Feb 16 22:02:35 crc kubenswrapper[4792]: E0216 22:02:35.788816 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33\": container with ID starting with 3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33 not found: ID does not exist" containerID="3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.788912 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33"} err="failed to get container status \"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33\": rpc error: code = NotFound desc = could not find container \"3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33\": container with ID starting with 3c3ca3d61b82fc8bcede47779aca28a98baf62ed04e88c58a29716e4ae5f4d33 not found: ID does not exist" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.788992 4792 scope.go:117] "RemoveContainer" containerID="ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d" Feb 16 22:02:35 crc kubenswrapper[4792]: E0216 22:02:35.789465 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d\": container with ID starting with ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d not found: ID does not exist" containerID="ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d" Feb 16 22:02:35 crc kubenswrapper[4792]: I0216 22:02:35.789537 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d"} err="failed to get container status \"ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d\": rpc error: code = NotFound desc = could not find container \"ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d\": container with ID starting with ecf80ed209c4f44f0eabea12eabf6300a361113ef91690805021bfc11b3dcc3d not found: ID does not exist" Feb 16 22:02:36 crc kubenswrapper[4792]: I0216 22:02:36.050310 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" path="/var/lib/kubelet/pods/83834f34-f8af-43c2-8ae0-1e48248d88e9/volumes" Feb 16 22:02:38 crc kubenswrapper[4792]: I0216 22:02:38.683535 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.553135 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-njp9q"] Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.568959 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-njp9q"] Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.674455 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jndsb"] Feb 16 22:02:49 crc kubenswrapper[4792]: E0216 22:02:49.675018 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="extract-utilities" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.675035 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="extract-utilities" Feb 16 22:02:49 crc kubenswrapper[4792]: E0216 22:02:49.675049 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="extract-content" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.675056 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="extract-content" Feb 16 22:02:49 crc kubenswrapper[4792]: E0216 22:02:49.675067 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.675073 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.675320 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="83834f34-f8af-43c2-8ae0-1e48248d88e9" containerName="registry-server" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.676204 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.690021 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jndsb"] Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.823269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-config-data\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.823591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxv4r\" (UniqueName: \"kubernetes.io/projected/c7d886e6-27ad-48f2-a820-76ae43892a4f-kube-api-access-hxv4r\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.823797 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-combined-ca-bundle\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.928968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-combined-ca-bundle\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.929265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-config-data\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.929312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxv4r\" (UniqueName: \"kubernetes.io/projected/c7d886e6-27ad-48f2-a820-76ae43892a4f-kube-api-access-hxv4r\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.940744 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-combined-ca-bundle\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.942825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d886e6-27ad-48f2-a820-76ae43892a4f-config-data\") pod \"heat-db-sync-jndsb\" (UID: \"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:49 crc kubenswrapper[4792]: I0216 22:02:49.950186 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxv4r\" (UniqueName: \"kubernetes.io/projected/c7d886e6-27ad-48f2-a820-76ae43892a4f-kube-api-access-hxv4r\") pod \"heat-db-sync-jndsb\" (UID: 
\"c7d886e6-27ad-48f2-a820-76ae43892a4f\") " pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:50 crc kubenswrapper[4792]: I0216 22:02:50.004193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jndsb" Feb 16 22:02:50 crc kubenswrapper[4792]: I0216 22:02:50.062309 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d59609-2910-4114-98d4-0f5154b95b1b" path="/var/lib/kubelet/pods/72d59609-2910-4114-98d4-0f5154b95b1b/volumes" Feb 16 22:02:50 crc kubenswrapper[4792]: I0216 22:02:50.520255 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jndsb"] Feb 16 22:02:50 crc kubenswrapper[4792]: I0216 22:02:50.529199 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:02:50 crc kubenswrapper[4792]: E0216 22:02:50.647212 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:02:50 crc kubenswrapper[4792]: E0216 22:02:50.647547 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:02:50 crc kubenswrapper[4792]: E0216 22:02:50.647708 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:02:50 crc kubenswrapper[4792]: E0216 22:02:50.649237 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:02:50 crc kubenswrapper[4792]: I0216 22:02:50.821382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jndsb" event={"ID":"c7d886e6-27ad-48f2-a820-76ae43892a4f","Type":"ContainerStarted","Data":"a660d2a5090de1bf344e4e2eed428abd07e592e3dc3913d2bdd7834439cb34fa"} Feb 16 22:02:50 crc kubenswrapper[4792]: E0216 22:02:50.823444 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.524946 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.525260 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-central-agent" containerID="cri-o://9705c41c707da8a50b5b6773148a1b7f4f02cd3a0f61ab49d9f9e00aa04e05fb" gracePeriod=30 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.525277 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="proxy-httpd" containerID="cri-o://8d123a688b2a96c4c26063f8e19518d0a3687a7608b9e2a52d3f75d2f438c4ea" gracePeriod=30 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.525315 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="sg-core" containerID="cri-o://d6182231a63fd22444e5df4359d77549e0965a8589ea71c9138bcd1551e912b0" gracePeriod=30 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.525369 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-notification-agent" containerID="cri-o://06500abde444926febab2ec9ec4556109c1db4ad34da17b7a4a786828171f0c9" gracePeriod=30 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.588520 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.833742 4792 generic.go:334] "Generic (PLEG): container finished" podID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerID="8d123a688b2a96c4c26063f8e19518d0a3687a7608b9e2a52d3f75d2f438c4ea" exitCode=0 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.833775 4792 generic.go:334] "Generic (PLEG): container finished" podID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerID="d6182231a63fd22444e5df4359d77549e0965a8589ea71c9138bcd1551e912b0" exitCode=2 Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.833804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerDied","Data":"8d123a688b2a96c4c26063f8e19518d0a3687a7608b9e2a52d3f75d2f438c4ea"} Feb 16 22:02:51 crc kubenswrapper[4792]: I0216 22:02:51.833861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerDied","Data":"d6182231a63fd22444e5df4359d77549e0965a8589ea71c9138bcd1551e912b0"} Feb 16 22:02:51 crc kubenswrapper[4792]: E0216 22:02:51.835394 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:02:52 crc kubenswrapper[4792]: I0216 22:02:52.848108 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 22:02:52 crc kubenswrapper[4792]: I0216 22:02:52.862433 4792 generic.go:334] "Generic (PLEG): container finished" podID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerID="9705c41c707da8a50b5b6773148a1b7f4f02cd3a0f61ab49d9f9e00aa04e05fb" exitCode=0 Feb 16 22:02:52 crc kubenswrapper[4792]: I0216 22:02:52.862474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerDied","Data":"9705c41c707da8a50b5b6773148a1b7f4f02cd3a0f61ab49d9f9e00aa04e05fb"} Feb 16 22:02:53 crc kubenswrapper[4792]: I0216 22:02:53.879214 4792 generic.go:334] "Generic (PLEG): container finished" podID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerID="06500abde444926febab2ec9ec4556109c1db4ad34da17b7a4a786828171f0c9" exitCode=0 Feb 16 22:02:53 crc kubenswrapper[4792]: I0216 22:02:53.879276 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerDied","Data":"06500abde444926febab2ec9ec4556109c1db4ad34da17b7a4a786828171f0c9"} Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.057366 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226661 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226801 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226847 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n24sd\" (UniqueName: \"kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.226955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.227033 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd\") pod \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\" (UID: \"30f01c08-5d23-45c9-8de3-280a8f9e8c8e\") " Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.227132 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.227809 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.227823 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.234138 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts" (OuterVolumeSpecName: "scripts") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.236044 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd" (OuterVolumeSpecName: "kube-api-access-n24sd") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "kube-api-access-n24sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.275388 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.329349 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.329379 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.329388 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24sd\" (UniqueName: \"kubernetes.io/projected/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-kube-api-access-n24sd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.329400 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.341593 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.351826 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.374553 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data" (OuterVolumeSpecName: "config-data") pod "30f01c08-5d23-45c9-8de3-280a8f9e8c8e" (UID: "30f01c08-5d23-45c9-8de3-280a8f9e8c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.431486 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.431851 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.431867 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f01c08-5d23-45c9-8de3-280a8f9e8c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.892927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f01c08-5d23-45c9-8de3-280a8f9e8c8e","Type":"ContainerDied","Data":"1b6082b02019ded2fcaa191936516d8a6a8ea845c220e67f76931e68a85c578d"} Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.892982 4792 scope.go:117] "RemoveContainer" containerID="8d123a688b2a96c4c26063f8e19518d0a3687a7608b9e2a52d3f75d2f438c4ea" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.893002 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.945647 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.948328 4792 scope.go:117] "RemoveContainer" containerID="d6182231a63fd22444e5df4359d77549e0965a8589ea71c9138bcd1551e912b0" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.968209 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.979117 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:54 crc kubenswrapper[4792]: E0216 22:02:54.979756 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-central-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.979772 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-central-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: E0216 22:02:54.979804 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-notification-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.979811 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-notification-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: E0216 22:02:54.979827 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="sg-core" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.979835 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="sg-core" Feb 16 22:02:54 crc kubenswrapper[4792]: E0216 22:02:54.979849 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="proxy-httpd" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.979855 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="proxy-httpd" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.980118 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="proxy-httpd" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.980131 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="sg-core" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.980143 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-central-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.980155 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" containerName="ceilometer-notification-agent" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.982242 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.985510 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.987203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.987425 4792 scope.go:117] "RemoveContainer" containerID="06500abde444926febab2ec9ec4556109c1db4ad34da17b7a4a786828171f0c9" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.987648 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 22:02:54 crc kubenswrapper[4792]: I0216 22:02:54.996417 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.036040 4792 scope.go:117] "RemoveContainer" containerID="9705c41c707da8a50b5b6773148a1b7f4f02cd3a0f61ab49d9f9e00aa04e05fb" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.148330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-config-data\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.148706 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.148811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-run-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.148942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.149089 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-scripts\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.149199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.149313 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-log-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.149403 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gt5\" (UniqueName: \"kubernetes.io/projected/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-kube-api-access-r8gt5\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gt5\" (UniqueName: \"kubernetes.io/projected/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-kube-api-access-r8gt5\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252232 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-config-data\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252266 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-run-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-scripts\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-log-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-run-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.252948 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-log-httpd\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.257431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-scripts\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.258209 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-config-data\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.258842 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.264163 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.278676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.284555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gt5\" (UniqueName: \"kubernetes.io/projected/e58723ee-d9c2-4b71-b072-3cf7b2a26c12-kube-api-access-r8gt5\") pod \"ceilometer-0\" (UID: \"e58723ee-d9c2-4b71-b072-3cf7b2a26c12\") " pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.303352 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.821152 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 22:02:55 crc kubenswrapper[4792]: I0216 22:02:55.927756 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e58723ee-d9c2-4b71-b072-3cf7b2a26c12","Type":"ContainerStarted","Data":"6e5406db129c57caa3c64358a97e80e3dbb8605d55e99cc5b81d532cfebea323"} Feb 16 22:02:55 crc kubenswrapper[4792]: E0216 22:02:55.959646 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:02:55 crc kubenswrapper[4792]: E0216 22:02:55.959764 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:02:55 crc kubenswrapper[4792]: E0216 22:02:55.960013 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:02:56 crc kubenswrapper[4792]: I0216 22:02:56.038128 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f01c08-5d23-45c9-8de3-280a8f9e8c8e" path="/var/lib/kubelet/pods/30f01c08-5d23-45c9-8de3-280a8f9e8c8e/volumes" Feb 16 22:02:56 crc kubenswrapper[4792]: I0216 22:02:56.291097 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="rabbitmq" containerID="cri-o://9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08" gracePeriod=604796 Feb 16 22:02:56 crc kubenswrapper[4792]: I0216 22:02:56.943308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e58723ee-d9c2-4b71-b072-3cf7b2a26c12","Type":"ContainerStarted","Data":"8ea176e79e5460bfebe5ab6e6894a401cb49d89cc53fe163fed1e6c275ae88c8"} Feb 16 22:02:57 crc kubenswrapper[4792]: I0216 22:02:57.461110 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="rabbitmq" containerID="cri-o://a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b" gracePeriod=604796 Feb 16 22:02:57 crc kubenswrapper[4792]: I0216 22:02:57.959643 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e58723ee-d9c2-4b71-b072-3cf7b2a26c12","Type":"ContainerStarted","Data":"70f3dee99c1155898593e3fccc28de238f05beb726ef4269ca31c5183011471b"} Feb 16 22:02:59 crc kubenswrapper[4792]: E0216 22:02:59.246867 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:02:59 crc kubenswrapper[4792]: I0216 22:02:59.330211 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 16 22:02:59 crc kubenswrapper[4792]: I0216 22:02:59.633330 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 16 22:02:59 crc kubenswrapper[4792]: I0216 22:02:59.990500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e58723ee-d9c2-4b71-b072-3cf7b2a26c12","Type":"ContainerStarted","Data":"f889248724f177b972ba7dfcd96d857adaf5300697ee0a43801cd2eb540e463a"} Feb 16 22:02:59 crc kubenswrapper[4792]: I0216 22:02:59.992383 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 22:02:59 crc kubenswrapper[4792]: E0216 22:02:59.993846 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:01 crc kubenswrapper[4792]: E0216 22:03:01.006673 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:01 crc kubenswrapper[4792]: I0216 22:03:01.532805 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:03:01 crc kubenswrapper[4792]: I0216 22:03:01.532871 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.905474 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.968974 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.969131 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.969182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.969306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.969476 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.969802 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.970219 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.972144 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8ch\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.972581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.973096 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd\") pod 
\"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.973862 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf\") pod \"a04fbeec-860c-4b22-b88d-087872b64e62\" (UID: \"a04fbeec-860c-4b22-b88d-087872b64e62\") " Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.973420 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.974972 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.978015 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.979085 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.979122 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.979141 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.986779 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info" (OuterVolumeSpecName: "pod-info") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.988659 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:02 crc kubenswrapper[4792]: I0216 22:03:02.996095 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch" (OuterVolumeSpecName: "kube-api-access-ln8ch") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "kube-api-access-ln8ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.009326 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.046089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data" (OuterVolumeSpecName: "config-data") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.046568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e" (OuterVolumeSpecName: "persistence") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.080976 4792 generic.go:334] "Generic (PLEG): container finished" podID="a04fbeec-860c-4b22-b88d-087872b64e62" containerID="9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08" exitCode=0 Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.081039 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerDied","Data":"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08"} Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.081068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a04fbeec-860c-4b22-b88d-087872b64e62","Type":"ContainerDied","Data":"e7de349a9866bcba073cd393c7db26068162da598eeec123b1c269bea2d105b3"} Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.081085 4792 scope.go:117] "RemoveContainer" containerID="9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.081292 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084051 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a04fbeec-860c-4b22-b88d-087872b64e62-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084072 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084130 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8ch\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-kube-api-access-ln8ch\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084166 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") on node \"crc\" " Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084189 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a04fbeec-860c-4b22-b88d-087872b64e62-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.084199 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.132507 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.132702 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e") on node "crc" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.150963 4792 scope.go:117] "RemoveContainer" containerID="dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.152870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf" (OuterVolumeSpecName: "server-conf") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.185569 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.185658 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.185764 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.185960 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a04fbeec-860c-4b22-b88d-087872b64e62-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.185990 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.186876 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.211675 4792 scope.go:117] "RemoveContainer" containerID="9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.213160 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08\": container with ID starting with 9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08 not found: ID does not exist" containerID="9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.213277 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08"} err="failed to get container status \"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08\": rpc error: code = NotFound desc = could not find container \"9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08\": container with ID starting with 9eca3946db40189c7ff6b75578e2b8da5d1b7b3e5ff92e58578137d588885a08 not found: ID does not exist" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.213402 4792 scope.go:117] "RemoveContainer" containerID="dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.215360 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2\": container with ID starting with dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2 not found: ID does not exist" containerID="dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.215409 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2"} err="failed to get container status \"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2\": rpc error: code = NotFound desc = could not find container 
\"dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2\": container with ID starting with dc7b2453f172173d753798d7f0510efabf372685837b4f1f0392a4ff82dc2fd2 not found: ID does not exist" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.239082 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a04fbeec-860c-4b22-b88d-087872b64e62" (UID: "a04fbeec-860c-4b22-b88d-087872b64e62"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.288416 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a04fbeec-860c-4b22-b88d-087872b64e62-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.427836 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.441279 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.472681 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.473492 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="rabbitmq" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.473519 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="rabbitmq" Feb 16 22:03:03 crc kubenswrapper[4792]: E0216 22:03:03.473581 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="setup-container" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.473594 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="setup-container" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.474076 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" containerName="rabbitmq" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.476433 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.485679 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.601752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602069 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602131 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602163 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ba92392-a8a9-40c9-9b0a-d35179a63c16-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602181 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602270 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ba92392-a8a9-40c9-9b0a-d35179a63c16-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2s7v\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-kube-api-access-w2s7v\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602392 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.602431 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704077 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704198 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704243 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ba92392-a8a9-40c9-9b0a-d35179a63c16-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704273 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704448 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ba92392-a8a9-40c9-9b0a-d35179a63c16-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704608 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2s7v\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-kube-api-access-w2s7v\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 
22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704653 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.704859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.705039 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.706519 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.706963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.707247 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ba92392-a8a9-40c9-9b0a-d35179a63c16-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2" Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.708471 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.708533 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20c7bc1850b81174e9caedc70a44c7496e9450066847b70ee49f2f7f9bc6c364/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.708668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.708912 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.726638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.729277 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ba92392-a8a9-40c9-9b0a-d35179a63c16-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.729564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ba92392-a8a9-40c9-9b0a-d35179a63c16-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.732402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2s7v\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-kube-api-access-w2s7v\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.733153 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ba92392-a8a9-40c9-9b0a-d35179a63c16-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:03 crc kubenswrapper[4792]: I0216 22:03:03.811184 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb58f0a4-d9e5-4066-b838-b3a1b8ffc66e\") pod \"rabbitmq-server-2\" (UID: \"8ba92392-a8a9-40c9-9b0a-d35179a63c16\") " pod="openstack/rabbitmq-server-2"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.042575 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04fbeec-860c-4b22-b88d-087872b64e62" path="/var/lib/kubelet/pods/a04fbeec-860c-4b22-b88d-087872b64e62/volumes"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.078101 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.104009 4792 generic.go:334] "Generic (PLEG): container finished" podID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerID="a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b" exitCode=0
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.104652 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerDied","Data":"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"}
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.104784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"659cd2b3-5d9d-4992-acf8-385acdbbc443","Type":"ContainerDied","Data":"25b6ea188d50072778dee7cf23785d88b26c3075ed7619470b2781e2036e6a7d"}
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.104876 4792 scope.go:117] "RemoveContainer" containerID="a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.104974 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.105504 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115165 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115261 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115298 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vnwv\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.115452 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.116204 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.116300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.116379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.116434 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.116491 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info\") pod \"659cd2b3-5d9d-4992-acf8-385acdbbc443\" (UID: \"659cd2b3-5d9d-4992-acf8-385acdbbc443\") "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.124409 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.124412 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.127244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info" (OuterVolumeSpecName: "pod-info") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.131042 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.133517 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.134097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv" (OuterVolumeSpecName: "kube-api-access-9vnwv") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "kube-api-access-9vnwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.135108 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.179116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data" (OuterVolumeSpecName: "config-data") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.205788 4792 scope.go:117] "RemoveContainer" containerID="1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.220724 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229039 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/659cd2b3-5d9d-4992-acf8-385acdbbc443-pod-info\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229238 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229321 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vnwv\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-kube-api-access-9vnwv\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229431 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229510 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/659cd2b3-5d9d-4992-acf8-385acdbbc443-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229651 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.229749 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.289327 4792 scope.go:117] "RemoveContainer" containerID="a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"
Feb 16 22:03:04 crc kubenswrapper[4792]: E0216 22:03:04.299962 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b\": container with ID starting with a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b not found: ID does not exist" containerID="a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.300001 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b"} err="failed to get container status \"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b\": rpc error: code = NotFound desc = could not find container \"a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b\": container with ID starting with a8997c0dfb5a1a9468d49bc9f832252cf6487699b1694123dd9e6d02f36cbc1b not found: ID does not exist"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.300025 4792 scope.go:117] "RemoveContainer" containerID="1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"
Feb 16 22:03:04 crc kubenswrapper[4792]: E0216 22:03:04.318307 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228\": container with ID starting with 1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228 not found: ID does not exist" containerID="1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.318347 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228"} err="failed to get container status \"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228\": rpc error: code = NotFound desc = could not find container \"1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228\": container with ID starting with 1e294fbc0d92ea50d92dcce70fd58270511c21018cd3973756816f827688a228 not found: ID does not exist"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.321220 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf" (OuterVolumeSpecName: "server-conf") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.333915 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/659cd2b3-5d9d-4992-acf8-385acdbbc443-server-conf\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.355747 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66" (OuterVolumeSpecName: "persistence") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.437329 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") on node \"crc\" "
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.471633 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "659cd2b3-5d9d-4992-acf8-385acdbbc443" (UID: "659cd2b3-5d9d-4992-acf8-385acdbbc443"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.473848 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.474030 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66") on node "crc"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.542484 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/659cd2b3-5d9d-4992-acf8-385acdbbc443-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.542517 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") on node \"crc\" DevicePath \"\""
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.757369 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.775338 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.792279 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 22:03:04 crc kubenswrapper[4792]: E0216 22:03:04.792918 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="rabbitmq"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.792942 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="rabbitmq"
Feb 16 22:03:04 crc kubenswrapper[4792]: E0216 22:03:04.792967 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="setup-container"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.792976 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="setup-container"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.793245 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" containerName="rabbitmq"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.794910 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.798042 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.798252 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.798415 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k5hbt"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.798620 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.798731 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.799069 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.799198 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.810927 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.822881 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.956698 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40456664-5897-4d32-b9de-d0d48a06764d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957304 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957397 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957708 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957863 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40456664-5897-4d32-b9de-d0d48a06764d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.957958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbq2f\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-kube-api-access-rbq2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.958044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:04 crc kubenswrapper[4792]: I0216 22:03:04.958146 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40456664-5897-4d32-b9de-d0d48a06764d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060754 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060888 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40456664-5897-4d32-b9de-d0d48a06764d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.060980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbq2f\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-kube-api-access-rbq2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.061002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.061051 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.061129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.062964 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.062975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.063662 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.063796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.063990 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40456664-5897-4d32-b9de-d0d48a06764d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.066395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40456664-5897-4d32-b9de-d0d48a06764d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.066590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40456664-5897-4d32-b9de-d0d48a06764d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.067225 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.067348 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/408ec3f2e4754699964c8e323d7cd2d28ec9bc48e0167cd7e040036a16df5c2f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.069187 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.073462 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.082213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbq2f\" (UniqueName: \"kubernetes.io/projected/40456664-5897-4d32-b9de-d0d48a06764d-kube-api-access-rbq2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.119250 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ba92392-a8a9-40c9-9b0a-d35179a63c16","Type":"ContainerStarted","Data":"3782f46ecc1edd8b125ce2fe5a24e89dfbcbe7f322b1332520b9074bff1951ce"}
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.139867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e3cb41c-09a4-45d2-9a99-b761125e8a66\") pod \"rabbitmq-cell1-server-0\" (UID: \"40456664-5897-4d32-b9de-d0d48a06764d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.308498 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.645231 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"]
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.650960 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.653663 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.704646 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"]
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.782977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783297 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783418 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783472 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783578 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr5l\" (UniqueName: \"kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.783626 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: W0216 22:03:05.784238 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40456664_5897_4d32_b9de_d0d48a06764d.slice/crio-ba33d633994590a34988d9a21fba812acf315f73f46086f58e3599ae87d15da6 WatchSource:0}: Error finding container ba33d633994590a34988d9a21fba812acf315f73f46086f58e3599ae87d15da6: Status 404 returned error can't find the container with id ba33d633994590a34988d9a21fba812acf315f73f46086f58e3599ae87d15da6
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.785483 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885226 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885278 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnr5l\" (UniqueName: \"kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.885406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886016 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886629 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.886645 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:05 crc kubenswrapper[4792]: I0216 22:03:05.904170 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnr5l\" (UniqueName: \"kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l\") pod \"dnsmasq-dns-594cb89c79-zw66w\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " pod="openstack/dnsmasq-dns-594cb89c79-zw66w"
Feb 16 22:03:06 crc kubenswrapper[4792]: I0216 22:03:06.004847 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" Feb 16 22:03:06 crc kubenswrapper[4792]: I0216 22:03:06.041497 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659cd2b3-5d9d-4992-acf8-385acdbbc443" path="/var/lib/kubelet/pods/659cd2b3-5d9d-4992-acf8-385acdbbc443/volumes" Feb 16 22:03:06 crc kubenswrapper[4792]: I0216 22:03:06.152518 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40456664-5897-4d32-b9de-d0d48a06764d","Type":"ContainerStarted","Data":"ba33d633994590a34988d9a21fba812acf315f73f46086f58e3599ae87d15da6"} Feb 16 22:03:06 crc kubenswrapper[4792]: W0216 22:03:06.479986 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ab7ff28_f268_4aa6_abea_dedc54294f2d.slice/crio-e4baad25254c589b6a4f9b1033c53040fc66e7fe8008cd93036fdc495d41e4db WatchSource:0}: Error finding container e4baad25254c589b6a4f9b1033c53040fc66e7fe8008cd93036fdc495d41e4db: Status 404 returned error can't find the container with id e4baad25254c589b6a4f9b1033c53040fc66e7fe8008cd93036fdc495d41e4db Feb 16 22:03:06 crc kubenswrapper[4792]: I0216 22:03:06.488159 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"] Feb 16 22:03:07 crc kubenswrapper[4792]: I0216 22:03:07.165664 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerID="1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8" exitCode=0 Feb 16 22:03:07 crc kubenswrapper[4792]: I0216 22:03:07.165706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" event={"ID":"2ab7ff28-f268-4aa6-abea-dedc54294f2d","Type":"ContainerDied","Data":"1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8"} Feb 16 22:03:07 crc kubenswrapper[4792]: I0216 22:03:07.166069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" event={"ID":"2ab7ff28-f268-4aa6-abea-dedc54294f2d","Type":"ContainerStarted","Data":"e4baad25254c589b6a4f9b1033c53040fc66e7fe8008cd93036fdc495d41e4db"} Feb 16 22:03:08 crc kubenswrapper[4792]: I0216 22:03:08.221424 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40456664-5897-4d32-b9de-d0d48a06764d","Type":"ContainerStarted","Data":"744e8dc1f9e8ea161f333d0a65008114717aee9e3d52f04321c6f90527c2261a"} Feb 16 22:03:08 crc kubenswrapper[4792]: I0216 22:03:08.227545 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" event={"ID":"2ab7ff28-f268-4aa6-abea-dedc54294f2d","Type":"ContainerStarted","Data":"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f"} Feb 16 22:03:08 crc kubenswrapper[4792]: I0216 22:03:08.227775 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" Feb 16 22:03:08 crc kubenswrapper[4792]: I0216 22:03:08.281943 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" podStartSLOduration=3.281922041 podStartE2EDuration="3.281922041s" podCreationTimestamp="2026-02-16 22:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:03:08.275896577 +0000 UTC m=+1520.929175468" watchObservedRunningTime="2026-02-16 
22:03:08.281922041 +0000 UTC m=+1520.935200932" Feb 16 22:03:09 crc kubenswrapper[4792]: I0216 22:03:09.238613 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ba92392-a8a9-40c9-9b0a-d35179a63c16","Type":"ContainerStarted","Data":"67414ce59bd8c60914d888288d8710188d8d5f923161df60afdfe0649580388f"} Feb 16 22:03:13 crc kubenswrapper[4792]: I0216 22:03:13.041866 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 22:03:13 crc kubenswrapper[4792]: E0216 22:03:13.159167 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:03:13 crc kubenswrapper[4792]: E0216 22:03:13.159650 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:03:13 crc kubenswrapper[4792]: E0216 22:03:13.159777 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:03:13 crc kubenswrapper[4792]: E0216 22:03:13.160983 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:13 crc kubenswrapper[4792]: E0216 22:03:13.312891 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.006950 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" Feb 16 22:03:16 crc kubenswrapper[4792]: E0216 22:03:16.030761 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.135867 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.136161 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="dnsmasq-dns" containerID="cri-o://929df6369f20e845c3e9fc24590d951318fae1da90013e2d73ce93f9eaa6f02d" gracePeriod=10 Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.319418 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-5jl4c"] Feb 16 22:03:16 crc 
kubenswrapper[4792]: I0216 22:03:16.322176 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.352718 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-5jl4c"] Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.377464 4792 generic.go:334] "Generic (PLEG): container finished" podID="161250cf-19fe-49b8-bb81-4946c8b56056" containerID="929df6369f20e845c3e9fc24590d951318fae1da90013e2d73ce93f9eaa6f02d" exitCode=0 Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.378226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" event={"ID":"161250cf-19fe-49b8-bb81-4946c8b56056","Type":"ContainerDied","Data":"929df6369f20e845c3e9fc24590d951318fae1da90013e2d73ce93f9eaa6f02d"} Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472114 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59mx\" (UniqueName: \"kubernetes.io/projected/1e5abd0c-4ca2-460c-a47f-a057371692d2-kube-api-access-l59mx\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.472529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-config\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574098 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-config\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574333 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59mx\" (UniqueName: \"kubernetes.io/projected/1e5abd0c-4ca2-460c-a47f-a057371692d2-kube-api-access-l59mx\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.574438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.575355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.575382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-config\") pod 
\"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.575401 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.575482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.575569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.576016 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1e5abd0c-4ca2-460c-a47f-a057371692d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.603834 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59mx\" (UniqueName: \"kubernetes.io/projected/1e5abd0c-4ca2-460c-a47f-a057371692d2-kube-api-access-l59mx\") pod \"dnsmasq-dns-5596c69fcc-5jl4c\" (UID: \"1e5abd0c-4ca2-460c-a47f-a057371692d2\") " pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.661520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:16 crc kubenswrapper[4792]: I0216 22:03:16.985489 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.086914 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.086985 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.087113 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn8xh\" (UniqueName: \"kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.087361 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.087415 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.087479 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0\") pod \"161250cf-19fe-49b8-bb81-4946c8b56056\" (UID: \"161250cf-19fe-49b8-bb81-4946c8b56056\") " Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.109293 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh" (OuterVolumeSpecName: "kube-api-access-qn8xh") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "kube-api-access-qn8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.174549 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.181137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.181182 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config" (OuterVolumeSpecName: "config") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.191918 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.191954 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.191965 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.191974 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn8xh\" (UniqueName: \"kubernetes.io/projected/161250cf-19fe-49b8-bb81-4946c8b56056-kube-api-access-qn8xh\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.199315 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.200217 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "161250cf-19fe-49b8-bb81-4946c8b56056" (UID: "161250cf-19fe-49b8-bb81-4946c8b56056"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.293886 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.293921 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/161250cf-19fe-49b8-bb81-4946c8b56056-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:17 crc kubenswrapper[4792]: W0216 22:03:17.297122 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5abd0c_4ca2_460c_a47f_a057371692d2.slice/crio-ddb2691929b1b95c6fc94345fb4cf4b856c01af10deeddae03e3eacb936d41d8 WatchSource:0}: Error finding container ddb2691929b1b95c6fc94345fb4cf4b856c01af10deeddae03e3eacb936d41d8: Status 404 returned error can't find the container with id ddb2691929b1b95c6fc94345fb4cf4b856c01af10deeddae03e3eacb936d41d8 Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.299342 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-5jl4c"] Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.393180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" event={"ID":"1e5abd0c-4ca2-460c-a47f-a057371692d2","Type":"ContainerStarted","Data":"ddb2691929b1b95c6fc94345fb4cf4b856c01af10deeddae03e3eacb936d41d8"} Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.396932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" event={"ID":"161250cf-19fe-49b8-bb81-4946c8b56056","Type":"ContainerDied","Data":"97f99d9081f1498ecbbb884347e611fd74db69bfeaf508cab2b356d882875e84"} Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.396986 4792 scope.go:117] "RemoveContainer" containerID="929df6369f20e845c3e9fc24590d951318fae1da90013e2d73ce93f9eaa6f02d" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.397098 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-hbdhp" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.450392 4792 scope.go:117] "RemoveContainer" containerID="9ae0bcdb1fbbd37d79bd6430b2f8d3ca0ae50523bcdbfdbbe8ea0e7e2bb8f63d" Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.481350 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:03:17 crc kubenswrapper[4792]: I0216 22:03:17.492960 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-hbdhp"] Feb 16 22:03:18 crc kubenswrapper[4792]: I0216 22:03:18.043870 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" path="/var/lib/kubelet/pods/161250cf-19fe-49b8-bb81-4946c8b56056/volumes" Feb 16 22:03:18 crc kubenswrapper[4792]: I0216 22:03:18.413476 4792 generic.go:334] "Generic (PLEG): container finished" podID="1e5abd0c-4ca2-460c-a47f-a057371692d2" containerID="d625444d81beefeebd73ad627507504ae1c66656fe16a80f3e5196f505580a47" exitCode=0 Feb 16 22:03:18 crc kubenswrapper[4792]: I0216 22:03:18.413543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" event={"ID":"1e5abd0c-4ca2-460c-a47f-a057371692d2","Type":"ContainerDied","Data":"d625444d81beefeebd73ad627507504ae1c66656fe16a80f3e5196f505580a47"} Feb 16 22:03:19 crc kubenswrapper[4792]: I0216 22:03:19.431656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" event={"ID":"1e5abd0c-4ca2-460c-a47f-a057371692d2","Type":"ContainerStarted","Data":"a5747592a60c3c589c255a22eed85d53261847539a38fd498e245bac913e491a"} Feb 16 22:03:19 crc kubenswrapper[4792]: I0216 22:03:19.431980 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:19 crc kubenswrapper[4792]: I0216 22:03:19.472331 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" podStartSLOduration=3.472313302 podStartE2EDuration="3.472313302s" podCreationTimestamp="2026-02-16 22:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:03:19.467087391 +0000 UTC m=+1532.120366322" watchObservedRunningTime="2026-02-16 22:03:19.472313302 +0000 UTC m=+1532.125592193" Feb 16 22:03:26 crc kubenswrapper[4792]: E0216 22:03:26.033817 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:26 crc kubenswrapper[4792]: I0216 22:03:26.664127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-5jl4c" Feb 16 22:03:26 crc kubenswrapper[4792]: I0216 22:03:26.744287 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"] Feb 16 22:03:26 crc kubenswrapper[4792]: I0216 22:03:26.744507 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="dnsmasq-dns" 
containerID="cri-o://de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f" gracePeriod=10 Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.158108 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.158415 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.158546 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.160721 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.409042 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.471733 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.471814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.471886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.471945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.472044 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.472102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnr5l\" (UniqueName: \"kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.472253 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config\") pod \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\" (UID: \"2ab7ff28-f268-4aa6-abea-dedc54294f2d\") " Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.480332 4792 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l" (OuterVolumeSpecName: "kube-api-access-bnr5l") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "kube-api-access-bnr5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.540740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.541150 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.547137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.553789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.569364 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config" (OuterVolumeSpecName: "config") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.572499 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ab7ff28-f268-4aa6-abea-dedc54294f2d" (UID: "2ab7ff28-f268-4aa6-abea-dedc54294f2d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574836 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnr5l\" (UniqueName: \"kubernetes.io/projected/2ab7ff28-f268-4aa6-abea-dedc54294f2d-kube-api-access-bnr5l\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574861 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574870 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574878 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574888 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574896 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.574905 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab7ff28-f268-4aa6-abea-dedc54294f2d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.703183 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerID="de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f" exitCode=0 Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.703223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" event={"ID":"2ab7ff28-f268-4aa6-abea-dedc54294f2d","Type":"ContainerDied","Data":"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f"} Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.703253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" event={"ID":"2ab7ff28-f268-4aa6-abea-dedc54294f2d","Type":"ContainerDied","Data":"e4baad25254c589b6a4f9b1033c53040fc66e7fe8008cd93036fdc495d41e4db"} Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.703270 4792 scope.go:117] "RemoveContainer" containerID="de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.703276 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-zw66w" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.743772 4792 scope.go:117] "RemoveContainer" containerID="1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.745276 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"] Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.756267 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-zw66w"] Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.780377 4792 scope.go:117] "RemoveContainer" containerID="de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f" Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.780783 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f\": container with ID starting with de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f not found: ID does not exist" containerID="de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.780814 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f"} err="failed to get container status \"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f\": rpc error: code = NotFound desc = could not find container \"de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f\": container with ID starting with de5ac59c0173e5fefb45fc8d15c73dbbf946b44715c73d7cd2406620a241732f not found: ID does not exist" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.780837 4792 scope.go:117] "RemoveContainer" containerID="1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8" Feb 16 22:03:27 crc kubenswrapper[4792]: E0216 22:03:27.785031 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8\": container with ID starting with 1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8 not found: ID does not exist" containerID="1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8" Feb 16 22:03:27 crc kubenswrapper[4792]: I0216 22:03:27.785172 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8"} err="failed to get container status \"1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8\": rpc error: code = NotFound desc = could not find container \"1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8\": container with ID starting with 1790eb5b122da74a2876e5a3c86cc298aaab590955bff3f36b41535e5eced7c8 not found: ID does not exist" Feb 16 22:03:28 crc kubenswrapper[4792]: I0216 22:03:28.039090 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" path="/var/lib/kubelet/pods/2ab7ff28-f268-4aa6-abea-dedc54294f2d/volumes" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.532126 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.532711 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.532754 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.533894 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.533953 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" gracePeriod=600 Feb 16 22:03:31 crc kubenswrapper[4792]: E0216 22:03:31.659614 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.752504 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" exitCode=0 Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.752548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"} Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.752606 4792 scope.go:117] "RemoveContainer" containerID="c6b0d4d9e89caed1f38ef6d4d43202d82036618edcd0b96ba5b894227261bcc4" Feb 16 22:03:31 crc kubenswrapper[4792]: I0216 22:03:31.753280 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:03:31 crc kubenswrapper[4792]: E0216 22:03:31.753549 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.694811 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:35 crc kubenswrapper[4792]: E0216 22:03:35.696060 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696080 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: E0216 22:03:35.696098 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="init" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696105 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="init" Feb 16 22:03:35 crc kubenswrapper[4792]: E0216 22:03:35.696122 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="init" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696129 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="init" Feb 16 22:03:35 crc kubenswrapper[4792]: E0216 22:03:35.696163 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696170 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696474 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab7ff28-f268-4aa6-abea-dedc54294f2d" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.696506 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="161250cf-19fe-49b8-bb81-4946c8b56056" containerName="dnsmasq-dns" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.698621 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.744662 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.773054 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.773810 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.773857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rw8\" (UniqueName: \"kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.875918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.875985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rw8\" (UniqueName: \"kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.876103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.876811 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.876830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:35 crc kubenswrapper[4792]: I0216 22:03:35.897253 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h5rw8\" (UniqueName: \"kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8\") pod \"redhat-marketplace-mr8l5\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:36 crc kubenswrapper[4792]: I0216 22:03:36.032893 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:36 crc kubenswrapper[4792]: I0216 22:03:36.548515 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:36 crc kubenswrapper[4792]: I0216 22:03:36.813614 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerStarted","Data":"aabb6436b485f8ff3c4fac293c4c3e13b5780015e39e8e5f0cff2a32113cf398"} Feb 16 22:03:37 crc kubenswrapper[4792]: I0216 22:03:37.827376 4792 generic.go:334] "Generic (PLEG): container finished" podID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerID="4e46e6b120f51fa22af67844ee34d34498799566c2c719e5e63d2a851dc6b88c" exitCode=0 Feb 16 22:03:37 crc kubenswrapper[4792]: I0216 22:03:37.827460 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerDied","Data":"4e46e6b120f51fa22af67844ee34d34498799566c2c719e5e63d2a851dc6b88c"} Feb 16 22:03:38 crc kubenswrapper[4792]: I0216 22:03:38.843591 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerStarted","Data":"2a5d1a3a0992b6812ec7fcb56af810824d586826383e9ae255478ba2f7c0ce7c"} Feb 16 22:03:39 crc kubenswrapper[4792]: E0216 22:03:39.162990 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:03:39 crc kubenswrapper[4792]: E0216 22:03:39.163323 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:03:39 crc kubenswrapper[4792]: E0216 22:03:39.163456 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:03:39 crc kubenswrapper[4792]: E0216 22:03:39.164673 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:39 crc kubenswrapper[4792]: I0216 22:03:39.874022 4792 generic.go:334] "Generic (PLEG): container finished" podID="40456664-5897-4d32-b9de-d0d48a06764d" containerID="744e8dc1f9e8ea161f333d0a65008114717aee9e3d52f04321c6f90527c2261a" exitCode=0 Feb 16 22:03:39 crc kubenswrapper[4792]: I0216 22:03:39.874086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40456664-5897-4d32-b9de-d0d48a06764d","Type":"ContainerDied","Data":"744e8dc1f9e8ea161f333d0a65008114717aee9e3d52f04321c6f90527c2261a"} Feb 16 22:03:39 crc kubenswrapper[4792]: I0216 22:03:39.877198 4792 generic.go:334] "Generic (PLEG): container finished" podID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerID="2a5d1a3a0992b6812ec7fcb56af810824d586826383e9ae255478ba2f7c0ce7c" exitCode=0 Feb 16 22:03:39 crc kubenswrapper[4792]: I0216 22:03:39.877794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerDied","Data":"2a5d1a3a0992b6812ec7fcb56af810824d586826383e9ae255478ba2f7c0ce7c"} Feb 16 22:03:40 crc kubenswrapper[4792]: E0216 22:03:40.029190 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.891240 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerStarted","Data":"6956f8091479309baacdfe84398d65f11dc99c31ccf34231e1dde5e3bbc3c65c"} Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.895075 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"40456664-5897-4d32-b9de-d0d48a06764d","Type":"ContainerStarted","Data":"4e3e804acd84eb0729f1d0fdd4d4ace0733432a1e4d8abb74d1fe423eefc4181"} Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.895753 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.897045 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ba92392-a8a9-40c9-9b0a-d35179a63c16" containerID="67414ce59bd8c60914d888288d8710188d8d5f923161df60afdfe0649580388f" exitCode=0 Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.897070 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ba92392-a8a9-40c9-9b0a-d35179a63c16","Type":"ContainerDied","Data":"67414ce59bd8c60914d888288d8710188d8d5f923161df60afdfe0649580388f"} Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.941067 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mr8l5" podStartSLOduration=3.488328922 podStartE2EDuration="5.941049895s" podCreationTimestamp="2026-02-16 22:03:35 +0000 UTC" firstStartedPulling="2026-02-16 22:03:37.829508181 +0000 UTC m=+1550.482787072" lastFinishedPulling="2026-02-16 22:03:40.282229164 +0000 UTC m=+1552.935508045" observedRunningTime="2026-02-16 22:03:40.930775135 +0000 UTC 
m=+1553.584054026" watchObservedRunningTime="2026-02-16 22:03:40.941049895 +0000 UTC m=+1553.594328786" Feb 16 22:03:40 crc kubenswrapper[4792]: I0216 22:03:40.992691 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.992665997 podStartE2EDuration="36.992665997s" podCreationTimestamp="2026-02-16 22:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:03:40.957862532 +0000 UTC m=+1553.611141433" watchObservedRunningTime="2026-02-16 22:03:40.992665997 +0000 UTC m=+1553.645944888" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.001200 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s"] Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.003009 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.013013 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.013230 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.013553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.013725 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.053686 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s"] Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.122340 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.122530 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.122638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlnz\" (UniqueName: \"kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.122775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.224912 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.224997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.225067 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlnz\" (UniqueName: \"kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.225121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.230928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.231486 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.243123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.251786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-knlnz\" (UniqueName: \"kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.443131 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.922367 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ba92392-a8a9-40c9-9b0a-d35179a63c16","Type":"ContainerStarted","Data":"60c6cddc47e504de05f2f272d96d5458ebb03b5f4b86670a1f3b04a2e5e17bb5"} Feb 16 22:03:41 crc kubenswrapper[4792]: I0216 22:03:41.952125 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.952103346 podStartE2EDuration="38.952103346s" podCreationTimestamp="2026-02-16 22:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:03:41.947173452 +0000 UTC m=+1554.600452343" watchObservedRunningTime="2026-02-16 22:03:41.952103346 +0000 UTC m=+1554.605382237" Feb 16 22:03:42 crc kubenswrapper[4792]: I0216 22:03:42.366797 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s"] Feb 16 22:03:42 crc kubenswrapper[4792]: I0216 22:03:42.933630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" event={"ID":"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb","Type":"ContainerStarted","Data":"c8ed77f1919b2925fe83f1d568f21e73060674191fc9ee4da730e624411dc3af"} Feb 16 22:03:44 crc kubenswrapper[4792]: I0216 22:03:44.106546 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.344646 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.351075 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.362359 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.431761 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.431830 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgg7z\" (UniqueName: \"kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.431927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.534359 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.534425 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgg7z\" (UniqueName: \"kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.534502 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.535035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.535344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.563395 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bgg7z\" (UniqueName: \"kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z\") pod \"certified-operators-bmcf6\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:45 crc kubenswrapper[4792]: I0216 22:03:45.708500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:03:46 crc kubenswrapper[4792]: I0216 22:03:46.119414 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:46 crc kubenswrapper[4792]: I0216 22:03:46.120440 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:46 crc kubenswrapper[4792]: I0216 22:03:46.145653 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:46 crc kubenswrapper[4792]: I0216 22:03:46.429354 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:03:47 crc kubenswrapper[4792]: I0216 22:03:47.027223 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:03:47 crc kubenswrapper[4792]: E0216 22:03:47.027954 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:03:47 crc kubenswrapper[4792]: I0216 22:03:47.043839 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:48 crc kubenswrapper[4792]: I0216 22:03:48.512000 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:49 crc kubenswrapper[4792]: I0216 22:03:49.014548 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mr8l5" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="registry-server" containerID="cri-o://6956f8091479309baacdfe84398d65f11dc99c31ccf34231e1dde5e3bbc3c65c" gracePeriod=2 Feb 16 22:03:50 crc kubenswrapper[4792]: I0216 22:03:50.034480 4792 generic.go:334] "Generic (PLEG): container finished" podID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerID="6956f8091479309baacdfe84398d65f11dc99c31ccf34231e1dde5e3bbc3c65c" exitCode=0 Feb 16 22:03:50 crc kubenswrapper[4792]: I0216 22:03:50.046398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerDied","Data":"6956f8091479309baacdfe84398d65f11dc99c31ccf34231e1dde5e3bbc3c65c"} Feb 16 22:03:51 crc kubenswrapper[4792]: E0216 22:03:51.030647 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:03:51 crc kubenswrapper[4792]: E0216 22:03:51.030894 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:03:53 crc kubenswrapper[4792]: W0216 22:03:53.081963 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3087ac68_6c5a_47f3_9cbe_c0cd404cbf78.slice/crio-3757e791a6b5e86778cbd558ecf230f5a73830db46cba85cd8c33e5ebd206fda WatchSource:0}: Error finding container 3757e791a6b5e86778cbd558ecf230f5a73830db46cba85cd8c33e5ebd206fda: Status 404 returned error can't find the container with id 3757e791a6b5e86778cbd558ecf230f5a73830db46cba85cd8c33e5ebd206fda Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.465734 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.681086 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.782344 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5rw8\" (UniqueName: \"kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8\") pod \"bbf502df-b96d-411c-8010-55e1e2a817f0\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.782611 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities\") pod \"bbf502df-b96d-411c-8010-55e1e2a817f0\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.782629 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content\") pod \"bbf502df-b96d-411c-8010-55e1e2a817f0\" (UID: \"bbf502df-b96d-411c-8010-55e1e2a817f0\") " Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.783070 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities" (OuterVolumeSpecName: "utilities") pod "bbf502df-b96d-411c-8010-55e1e2a817f0" (UID: "bbf502df-b96d-411c-8010-55e1e2a817f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.783358 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.789432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8" (OuterVolumeSpecName: "kube-api-access-h5rw8") pod "bbf502df-b96d-411c-8010-55e1e2a817f0" (UID: "bbf502df-b96d-411c-8010-55e1e2a817f0"). 
InnerVolumeSpecName "kube-api-access-h5rw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.809209 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbf502df-b96d-411c-8010-55e1e2a817f0" (UID: "bbf502df-b96d-411c-8010-55e1e2a817f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.886201 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5rw8\" (UniqueName: \"kubernetes.io/projected/bbf502df-b96d-411c-8010-55e1e2a817f0-kube-api-access-h5rw8\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:53 crc kubenswrapper[4792]: I0216 22:03:53.886257 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbf502df-b96d-411c-8010-55e1e2a817f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.098314 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" event={"ID":"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb","Type":"ContainerStarted","Data":"c1eaf63a79fef43427f3f5aa8690512f09fedbdb518672a969e2cfbe786db6d7"} Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.102225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr8l5" event={"ID":"bbf502df-b96d-411c-8010-55e1e2a817f0","Type":"ContainerDied","Data":"aabb6436b485f8ff3c4fac293c4c3e13b5780015e39e8e5f0cff2a32113cf398"} Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.102269 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr8l5" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.102279 4792 scope.go:117] "RemoveContainer" containerID="6956f8091479309baacdfe84398d65f11dc99c31ccf34231e1dde5e3bbc3c65c" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.105244 4792 generic.go:334] "Generic (PLEG): container finished" podID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerID="f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23" exitCode=0 Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.105282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerDied","Data":"f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23"} Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.105302 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerStarted","Data":"3757e791a6b5e86778cbd558ecf230f5a73830db46cba85cd8c33e5ebd206fda"} Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.109846 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.130473 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" podStartSLOduration=3.023639993 podStartE2EDuration="14.13040985s" podCreationTimestamp="2026-02-16 22:03:40 +0000 UTC" firstStartedPulling="2026-02-16 22:03:42.356252207 +0000 UTC m=+1555.009531098" lastFinishedPulling="2026-02-16 22:03:53.463022064 +0000 UTC m=+1566.116300955" observedRunningTime="2026-02-16 22:03:54.117836879 +0000 UTC m=+1566.771115780" watchObservedRunningTime="2026-02-16 22:03:54.13040985 +0000 UTC m=+1566.783688781" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.202217 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.204070 4792 scope.go:117] "RemoveContainer" containerID="2a5d1a3a0992b6812ec7fcb56af810824d586826383e9ae255478ba2f7c0ce7c" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.255223 4792 scope.go:117] "RemoveContainer" containerID="4e46e6b120f51fa22af67844ee34d34498799566c2c719e5e63d2a851dc6b88c" Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.259278 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:54 crc kubenswrapper[4792]: I0216 22:03:54.275991 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr8l5"] Feb 16 22:03:55 crc kubenswrapper[4792]: I0216 22:03:55.118469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerStarted","Data":"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da"} Feb 16 22:03:55 crc kubenswrapper[4792]: I0216 22:03:55.311803 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.041454 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" 
path="/var/lib/kubelet/pods/bbf502df-b96d-411c-8010-55e1e2a817f0/volumes" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.047128 4792 scope.go:117] "RemoveContainer" containerID="24e156cff974b6adba840e6304fa4d9473606ff354cf5a0d46936139e11b20bb" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.267396 4792 scope.go:117] "RemoveContainer" containerID="c874999e62700a5e133d4d3e676eda5a259fc1d73f1ee6fd3ebf6b12e843e528" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.320776 4792 scope.go:117] "RemoveContainer" containerID="24a450af07798fb54df8b438531aaf0da1b9411180deb04b2391292e7bd1515f" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.355750 4792 scope.go:117] "RemoveContainer" containerID="beaf8deeb319f83cd4751297f248c7459f696f1d31f766f3c293aa3fe9ee354b" Feb 16 22:03:56 crc kubenswrapper[4792]: I0216 22:03:56.419274 4792 scope.go:117] "RemoveContainer" containerID="700051e2b19821cea1bf4617d65e3ba1a67f335772a6b9e9a09a52d49d802dd2" Feb 16 22:03:57 crc kubenswrapper[4792]: I0216 22:03:57.142868 4792 generic.go:334] "Generic (PLEG): container finished" podID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerID="04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da" exitCode=0 Feb 16 22:03:57 crc kubenswrapper[4792]: I0216 22:03:57.142959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerDied","Data":"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da"} Feb 16 22:03:58 crc kubenswrapper[4792]: I0216 22:03:58.174714 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerStarted","Data":"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882"} Feb 16 22:03:58 crc kubenswrapper[4792]: I0216 22:03:58.208391 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmcf6" podStartSLOduration=9.813950884 podStartE2EDuration="13.208371583s" podCreationTimestamp="2026-02-16 22:03:45 +0000 UTC" firstStartedPulling="2026-02-16 22:03:54.10716483 +0000 UTC m=+1566.760443731" lastFinishedPulling="2026-02-16 22:03:57.501585539 +0000 UTC m=+1570.154864430" observedRunningTime="2026-02-16 22:03:58.203981984 +0000 UTC m=+1570.857260885" watchObservedRunningTime="2026-02-16 22:03:58.208371583 +0000 UTC m=+1570.861650474" Feb 16 22:03:59 crc kubenswrapper[4792]: I0216 22:03:59.311278 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 16 22:03:59 crc kubenswrapper[4792]: I0216 22:03:59.395908 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" containerID="cri-o://084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199" gracePeriod=604795 Feb 16 22:04:00 crc kubenswrapper[4792]: I0216 22:04:00.026431 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:04:00 crc kubenswrapper[4792]: E0216 22:04:00.027161 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:04:05 crc kubenswrapper[4792]: E0216 22:04:05.028770 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:04:05 crc kubenswrapper[4792]: I0216 22:04:05.709009 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:05 crc kubenswrapper[4792]: I0216 22:04:05.709323 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:05 crc kubenswrapper[4792]: I0216 22:04:05.769232 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.027451 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.111664 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.266797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.266881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.266939 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267085 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph9lj\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267181 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.267228 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.268624 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod 
"383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.268681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.268772 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.268824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf\") pod \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\" (UID: \"383a4dad-f6ec-491a-ab49-c2b2e1f4432a\") " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.269564 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.269597 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.269737 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.273568 4792 generic.go:334] "Generic (PLEG): container finished" podID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerID="084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199" exitCode=0 Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.273657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerDied","Data":"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199"} Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.273685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"383a4dad-f6ec-491a-ab49-c2b2e1f4432a","Type":"ContainerDied","Data":"941615da69a5130fa41d1c9d9762bc68d30a50200b0878a1780b98300add0963"} Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.273700 4792 scope.go:117] "RemoveContainer" containerID="084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.273802 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.275860 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.278009 4792 generic.go:334] "Generic (PLEG): container finished" podID="c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" containerID="c1eaf63a79fef43427f3f5aa8690512f09fedbdb518672a969e2cfbe786db6d7" exitCode=0 Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.278916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" event={"ID":"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb","Type":"ContainerDied","Data":"c1eaf63a79fef43427f3f5aa8690512f09fedbdb518672a969e2cfbe786db6d7"} Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.280769 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info" (OuterVolumeSpecName: "pod-info") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.286039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj" (OuterVolumeSpecName: "kube-api-access-ph9lj") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "kube-api-access-ph9lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.286451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.315075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data" (OuterVolumeSpecName: "config-data") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.338005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db" (OuterVolumeSpecName: "persistence") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.352148 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf" (OuterVolumeSpecName: "server-conf") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.363991 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372057 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372091 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph9lj\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-kube-api-access-ph9lj\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372104 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372117 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372145 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") on node \"crc\" " Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372159 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372172 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.372184 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.414805 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.414967 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db") on node "crc" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.437811 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "383a4dad-f6ec-491a-ab49-c2b2e1f4432a" (UID: "383a4dad-f6ec-491a-ab49-c2b2e1f4432a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.449763 4792 scope.go:117] "RemoveContainer" containerID="5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.473896 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/383a4dad-f6ec-491a-ab49-c2b2e1f4432a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.473926 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.474204 4792 scope.go:117] "RemoveContainer" containerID="084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.474740 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199\": container with ID starting with 084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199 not found: ID does not exist" containerID="084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.474780 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199"} err="failed to get container status \"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199\": rpc error: code = NotFound desc = could not find container \"084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199\": container with ID starting with 084cd9fc0a33b6dfb5fe8806a0869441c9750f897942d34f819e2da61fb50199 not found: ID does not exist" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.474805 4792 scope.go:117] "RemoveContainer" containerID="5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.475133 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a\": container with ID starting with 5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a not found: ID does not exist" containerID="5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.475167 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a"} err="failed to get container status \"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a\": rpc error: code = NotFound desc = could not find container \"5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a\": container with ID starting with 5cecf223ada67675f33e560fedbfc36a8fb44e310cde2180ac93d262b4d73f1a not found: ID does not exist" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.626843 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.636894 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.651519 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.652188 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="extract-content" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652216 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="extract-content" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.652235 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="extract-utilities" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652244 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="extract-utilities" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.652266 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="registry-server" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652274 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="registry-server" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.652302 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652310 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" Feb 16 22:04:06 crc kubenswrapper[4792]: E0216 22:04:06.652327 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="setup-container" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652334 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="setup-container" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652756 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf502df-b96d-411c-8010-55e1e2a817f0" containerName="registry-server" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.652787 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" containerName="rabbitmq" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.654379 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.671427 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.779848 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8sc7\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-kube-api-access-z8sc7\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.779932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.779967 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.779992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780042 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37d607c0-fb36-4635-9e83-4e07cd4906ff-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37d607c0-fb36-4635-9e83-4e07cd4906ff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780106 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-config-data\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780171 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780198 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.780215 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882375 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882398 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882461 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8sc7\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-kube-api-access-z8sc7\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882523 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " 
pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37d607c0-fb36-4635-9e83-4e07cd4906ff-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37d607c0-fb36-4635-9e83-4e07cd4906ff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-config-data\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.882882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.883688 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.883757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-config-data\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.883871 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37d607c0-fb36-4635-9e83-4e07cd4906ff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.886068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.886294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.886386 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/37d607c0-fb36-4635-9e83-4e07cd4906ff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.887534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.888805 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.890050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37d607c0-fb36-4635-9e83-4e07cd4906ff-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.893466 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.902099 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.902165 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ada0102548212a6fc40a49ae1a277fc6184298bf2db5d525ba55233f2962106/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.902271 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8sc7\" (UniqueName: \"kubernetes.io/projected/37d607c0-fb36-4635-9e83-4e07cd4906ff-kube-api-access-z8sc7\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.966739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d10c0a2d-7287-4819-a1fe-1e24e7d523db\") pod \"rabbitmq-server-1\" (UID: \"37d607c0-fb36-4635-9e83-4e07cd4906ff\") " pod="openstack/rabbitmq-server-1" Feb 16 22:04:06 crc kubenswrapper[4792]: I0216 22:04:06.996678 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 22:04:07 crc kubenswrapper[4792]: I0216 22:04:07.506439 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 22:04:07 crc kubenswrapper[4792]: I0216 22:04:07.880096 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.012789 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knlnz\" (UniqueName: \"kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz\") pod \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.012890 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory\") pod \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.013067 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam\") pod \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.013135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle\") pod \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\" (UID: \"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb\") " Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.017639 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz" (OuterVolumeSpecName: "kube-api-access-knlnz") pod "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" (UID: "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb"). InnerVolumeSpecName "kube-api-access-knlnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.043014 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383a4dad-f6ec-491a-ab49-c2b2e1f4432a" path="/var/lib/kubelet/pods/383a4dad-f6ec-491a-ab49-c2b2e1f4432a/volumes" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.116295 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" (UID: "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.116614 4792 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.116640 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knlnz\" (UniqueName: \"kubernetes.io/projected/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-kube-api-access-knlnz\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.135110 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" (UID: "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.142123 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory" (OuterVolumeSpecName: "inventory") pod "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" (UID: "c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.226178 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.226406 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.308988 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37d607c0-fb36-4635-9e83-4e07cd4906ff","Type":"ContainerStarted","Data":"f399a55b954533eb93476ad4543ff8447caa4fd6e5b54f460e654469a327f02a"} Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.311546 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.311543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s" event={"ID":"c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb","Type":"ContainerDied","Data":"c8ed77f1919b2925fe83f1d568f21e73060674191fc9ee4da730e624411dc3af"} Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.311590 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ed77f1919b2925fe83f1d568f21e73060674191fc9ee4da730e624411dc3af" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.311684 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bmcf6" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="registry-server" containerID="cri-o://07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882" gracePeriod=2 Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.388712 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7"] Feb 16 22:04:08 crc kubenswrapper[4792]: E0216 22:04:08.389251 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.389268 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.389488 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.390331 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.392922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.393087 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.393222 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.393341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.402327 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7"] Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.535426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.535514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.535640 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d28cw\" (UniqueName: \"kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.638332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d28cw\" (UniqueName: \"kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.638571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.638641 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.647290 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.657332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d28cw\" (UniqueName: \"kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.716020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-478c7\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.732569 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:08 crc kubenswrapper[4792]: I0216 22:04:08.978563 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.148669 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgg7z\" (UniqueName: \"kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z\") pod \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.148757 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content\") pod \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.148985 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities\") pod \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\" (UID: \"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78\") " Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.152058 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities" (OuterVolumeSpecName: "utilities") pod "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" (UID: "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.199818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" (UID: "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.252500 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.252553 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.320906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z" (OuterVolumeSpecName: "kube-api-access-bgg7z") pod "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" (UID: "3087ac68-6c5a-47f3-9cbe-c0cd404cbf78"). InnerVolumeSpecName "kube-api-access-bgg7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.329118 4792 generic.go:334] "Generic (PLEG): container finished" podID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerID="07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882" exitCode=0 Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.329185 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bmcf6" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.329211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerDied","Data":"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882"} Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.329613 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmcf6" event={"ID":"3087ac68-6c5a-47f3-9cbe-c0cd404cbf78","Type":"ContainerDied","Data":"3757e791a6b5e86778cbd558ecf230f5a73830db46cba85cd8c33e5ebd206fda"} Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.329636 4792 scope.go:117] "RemoveContainer" containerID="07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.354342 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgg7z\" (UniqueName: \"kubernetes.io/projected/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78-kube-api-access-bgg7z\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.422256 4792 scope.go:117] "RemoveContainer" containerID="04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da" Feb 16 22:04:09 crc kubenswrapper[4792]: W0216 22:04:09.433396 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a6b3ea_b10b_44b1_a26a_f9df8972529c.slice/crio-7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649 WatchSource:0}: Error finding container 7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649: Status 404 returned error can't find the container with id 7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649 Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.433446 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7"] Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.445400 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.456542 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bmcf6"] Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.496487 4792 scope.go:117] "RemoveContainer" containerID="f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.562410 4792 scope.go:117] "RemoveContainer" containerID="07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882" Feb 16 22:04:09 crc kubenswrapper[4792]: E0216 22:04:09.563469 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882\": container with ID starting with 07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882 not found: ID does not exist" containerID="07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.563511 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882"} err="failed to get container status 
\"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882\": rpc error: code = NotFound desc = could not find container \"07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882\": container with ID starting with 07c9627fd565b46204e42b567520234f55bfb10fa6cc2e047ef3bb5feaec6882 not found: ID does not exist" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.563536 4792 scope.go:117] "RemoveContainer" containerID="04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da" Feb 16 22:04:09 crc kubenswrapper[4792]: E0216 22:04:09.563856 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da\": container with ID starting with 04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da not found: ID does not exist" containerID="04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.563892 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da"} err="failed to get container status \"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da\": rpc error: code = NotFound desc = could not find container \"04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da\": container with ID starting with 04f449929dcc81b1c69e9e003f992624d8116695b84003dc1288996b817b92da not found: ID does not exist" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.563915 4792 scope.go:117] "RemoveContainer" containerID="f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23" Feb 16 22:04:09 crc kubenswrapper[4792]: E0216 22:04:09.564134 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23\": container with ID starting with f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23 not found: ID does not exist" containerID="f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23" Feb 16 22:04:09 crc kubenswrapper[4792]: I0216 22:04:09.564157 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23"} err="failed to get container status \"f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23\": rpc error: code = NotFound desc = could not find container \"f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23\": container with ID starting with f3d513adb66b299308c92f30ab40e8e3ea02cb887370edf03e2ecd4970ac6c23 not found: ID does not exist" Feb 16 22:04:10 crc kubenswrapper[4792]: I0216 22:04:10.038535 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" path="/var/lib/kubelet/pods/3087ac68-6c5a-47f3-9cbe-c0cd404cbf78/volumes" Feb 16 22:04:10 crc kubenswrapper[4792]: I0216 22:04:10.341679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" event={"ID":"c1a6b3ea-b10b-44b1-a26a-f9df8972529c","Type":"ContainerStarted","Data":"98437547aa44634225de9928effeb5262e2fb5d2c1db7fbb65174ef605f0e936"} Feb 16 22:04:10 crc kubenswrapper[4792]: I0216 22:04:10.341734 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" event={"ID":"c1a6b3ea-b10b-44b1-a26a-f9df8972529c","Type":"ContainerStarted","Data":"7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649"} Feb 16 22:04:10 crc kubenswrapper[4792]: I0216 22:04:10.344330 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37d607c0-fb36-4635-9e83-4e07cd4906ff","Type":"ContainerStarted","Data":"c10f1bf40bdeab4b1bb364e08c2eb4e8a2b8d0c0187ae21b42ff7de76807c94e"} Feb 16 22:04:10 crc kubenswrapper[4792]: I0216 22:04:10.361082 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" podStartSLOduration=1.901160232 podStartE2EDuration="2.361061993s" podCreationTimestamp="2026-02-16 22:04:08 +0000 UTC" firstStartedPulling="2026-02-16 22:04:09.436210165 +0000 UTC m=+1582.089489056" lastFinishedPulling="2026-02-16 22:04:09.896111906 +0000 UTC m=+1582.549390817" observedRunningTime="2026-02-16 22:04:10.356970022 +0000 UTC m=+1583.010248923" watchObservedRunningTime="2026-02-16 22:04:10.361061993 +0000 UTC m=+1583.014340884" Feb 16 22:04:13 crc kubenswrapper[4792]: I0216 22:04:13.026949 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:04:13 crc kubenswrapper[4792]: E0216 22:04:13.027805 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:04:13 crc kubenswrapper[4792]: I0216 22:04:13.385641 4792 generic.go:334] "Generic (PLEG): container finished" podID="c1a6b3ea-b10b-44b1-a26a-f9df8972529c" containerID="98437547aa44634225de9928effeb5262e2fb5d2c1db7fbb65174ef605f0e936" exitCode=0 Feb 16 22:04:13 crc kubenswrapper[4792]: I0216 22:04:13.385687 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" event={"ID":"c1a6b3ea-b10b-44b1-a26a-f9df8972529c","Type":"ContainerDied","Data":"98437547aa44634225de9928effeb5262e2fb5d2c1db7fbb65174ef605f0e936"} Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.061251 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.217487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d28cw\" (UniqueName: \"kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw\") pod \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.217539 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam\") pod \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.217728 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory\") pod \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\" (UID: \"c1a6b3ea-b10b-44b1-a26a-f9df8972529c\") " Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.224818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw" (OuterVolumeSpecName: "kube-api-access-d28cw") pod "c1a6b3ea-b10b-44b1-a26a-f9df8972529c" (UID: "c1a6b3ea-b10b-44b1-a26a-f9df8972529c"). InnerVolumeSpecName "kube-api-access-d28cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.254302 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory" (OuterVolumeSpecName: "inventory") pod "c1a6b3ea-b10b-44b1-a26a-f9df8972529c" (UID: "c1a6b3ea-b10b-44b1-a26a-f9df8972529c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.254579 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1a6b3ea-b10b-44b1-a26a-f9df8972529c" (UID: "c1a6b3ea-b10b-44b1-a26a-f9df8972529c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.321683 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d28cw\" (UniqueName: \"kubernetes.io/projected/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-kube-api-access-d28cw\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.322085 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.322106 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1a6b3ea-b10b-44b1-a26a-f9df8972529c-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.415507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" event={"ID":"c1a6b3ea-b10b-44b1-a26a-f9df8972529c","Type":"ContainerDied","Data":"7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649"} Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.415570 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7702b1b1880718945a64f8baeaf8c57dd8b756f85daf77bc0b0d63eaeb244649" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.415636 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-478c7" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.490523 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc"] Feb 16 22:04:15 crc kubenswrapper[4792]: E0216 22:04:15.491138 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="extract-content" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.497779 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="extract-content" Feb 16 22:04:15 crc kubenswrapper[4792]: E0216 22:04:15.497931 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a6b3ea-b10b-44b1-a26a-f9df8972529c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.497950 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a6b3ea-b10b-44b1-a26a-f9df8972529c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:15 crc kubenswrapper[4792]: E0216 22:04:15.497997 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="extract-utilities" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.498005 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="extract-utilities" Feb 16 22:04:15 crc kubenswrapper[4792]: E0216 22:04:15.498072 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="registry-server" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.498079 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="registry-server" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.498531 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a6b3ea-b10b-44b1-a26a-f9df8972529c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.498561 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3087ac68-6c5a-47f3-9cbe-c0cd404cbf78" containerName="registry-server" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.499510 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.502015 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.502705 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.503230 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.503400 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.508050 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc"] Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.630277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.630512 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.630669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.630715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4k6\" (UniqueName: \"kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.732887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.733022 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.733070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4k6\" (UniqueName: \"kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.733147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.738292 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.740213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.740990 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.752857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4k6\" (UniqueName: \"kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:15 crc kubenswrapper[4792]: I0216 22:04:15.830840 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" Feb 16 22:04:16 crc kubenswrapper[4792]: I0216 22:04:16.454376 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc"] Feb 16 22:04:17 crc kubenswrapper[4792]: E0216 22:04:17.029746 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:04:17 crc kubenswrapper[4792]: I0216 22:04:17.441389 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" event={"ID":"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8","Type":"ContainerStarted","Data":"3eaaa81a6fe5a1c50ad2754ae1c6683d085542cec2d917508c7c00a3d66a24ed"} Feb 16 22:04:17 crc kubenswrapper[4792]: I0216 22:04:17.441434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" event={"ID":"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8","Type":"ContainerStarted","Data":"113adc5b2556a6d3d60bc3f52e6a1708ff79fa2afa547db63e3851ae6ac1dab4"} Feb 16 22:04:17 crc kubenswrapper[4792]: I0216 22:04:17.462487 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" podStartSLOduration=2.046882138 podStartE2EDuration="2.462468386s" podCreationTimestamp="2026-02-16 22:04:15 +0000 UTC" firstStartedPulling="2026-02-16 22:04:16.457442225 +0000 UTC m=+1589.110721116" lastFinishedPulling="2026-02-16 22:04:16.873028483 +0000 UTC m=+1589.526307364" observedRunningTime="2026-02-16 22:04:17.456596316 +0000 UTC m=+1590.109875207" watchObservedRunningTime="2026-02-16 22:04:17.462468386 +0000 UTC m=+1590.115747267" Feb 16 22:04:19 crc kubenswrapper[4792]: E0216 22:04:19.170894 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:04:19 crc kubenswrapper[4792]: E0216 22:04:19.171436 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:04:19 crc kubenswrapper[4792]: E0216 22:04:19.171705 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:04:19 crc kubenswrapper[4792]: E0216 22:04:19.173018 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:04:24 crc kubenswrapper[4792]: I0216 22:04:24.028399 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:04:24 crc kubenswrapper[4792]: E0216 22:04:24.029362 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:04:31 crc kubenswrapper[4792]: E0216 22:04:31.030000 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:04:31 crc kubenswrapper[4792]: E0216 22:04:31.142708 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:04:31 crc kubenswrapper[4792]: E0216 22:04:31.142778 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:04:31 crc kubenswrapper[4792]: E0216 22:04:31.142911 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:04:31 crc kubenswrapper[4792]: E0216 22:04:31.143991 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.294729 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.300886 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.313983 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.451438 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.451668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.451917 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rp4g\" (UniqueName: \"kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.553480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rp4g\" (UniqueName: \"kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.553648 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.553731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.554132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.554177 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.574931 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rp4g\" (UniqueName: \"kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g\") pod \"community-operators-r4gx7\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:34 crc kubenswrapper[4792]: I0216 22:04:34.632693 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:35 crc kubenswrapper[4792]: I0216 22:04:35.196517 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:35 crc kubenswrapper[4792]: W0216 22:04:35.197686 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9896343c_5af3_4c19_84e2_acddddf062e3.slice/crio-b41e6e6751bfa3c2540202db01b51cee214d85ed0198fc2ab574690e43a14fe9 WatchSource:0}: Error finding container b41e6e6751bfa3c2540202db01b51cee214d85ed0198fc2ab574690e43a14fe9: Status 404 returned error can't find the container with id b41e6e6751bfa3c2540202db01b51cee214d85ed0198fc2ab574690e43a14fe9 Feb 16 22:04:35 crc kubenswrapper[4792]: I0216 22:04:35.665103 4792 generic.go:334] "Generic (PLEG): container finished" podID="9896343c-5af3-4c19-84e2-acddddf062e3" containerID="13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738" exitCode=0 Feb 16 22:04:35 crc kubenswrapper[4792]: I0216 22:04:35.665156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerDied","Data":"13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738"} Feb 16 22:04:35 crc kubenswrapper[4792]: I0216 22:04:35.665187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerStarted","Data":"b41e6e6751bfa3c2540202db01b51cee214d85ed0198fc2ab574690e43a14fe9"} Feb 16 22:04:36 crc kubenswrapper[4792]: I0216 22:04:36.026759 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:04:36 crc kubenswrapper[4792]: E0216 22:04:36.027251 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:04:36 crc kubenswrapper[4792]: I0216 22:04:36.677492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerStarted","Data":"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005"} Feb 16 22:04:38 crc 
kubenswrapper[4792]: I0216 22:04:38.709845 4792 generic.go:334] "Generic (PLEG): container finished" podID="9896343c-5af3-4c19-84e2-acddddf062e3" containerID="27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005" exitCode=0 Feb 16 22:04:38 crc kubenswrapper[4792]: I0216 22:04:38.709957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerDied","Data":"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005"} Feb 16 22:04:39 crc kubenswrapper[4792]: I0216 22:04:39.723171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerStarted","Data":"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb"} Feb 16 22:04:39 crc kubenswrapper[4792]: I0216 22:04:39.744171 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4gx7" podStartSLOduration=2.306836449 podStartE2EDuration="5.744150861s" podCreationTimestamp="2026-02-16 22:04:34 +0000 UTC" firstStartedPulling="2026-02-16 22:04:35.667899844 +0000 UTC m=+1608.321178735" lastFinishedPulling="2026-02-16 22:04:39.105214256 +0000 UTC m=+1611.758493147" observedRunningTime="2026-02-16 22:04:39.737031208 +0000 UTC m=+1612.390310119" watchObservedRunningTime="2026-02-16 22:04:39.744150861 +0000 UTC m=+1612.397429752" Feb 16 22:04:41 crc kubenswrapper[4792]: I0216 22:04:41.750720 4792 generic.go:334] "Generic (PLEG): container finished" podID="37d607c0-fb36-4635-9e83-4e07cd4906ff" containerID="c10f1bf40bdeab4b1bb364e08c2eb4e8a2b8d0c0187ae21b42ff7de76807c94e" exitCode=0 Feb 16 22:04:41 crc kubenswrapper[4792]: I0216 22:04:41.750843 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37d607c0-fb36-4635-9e83-4e07cd4906ff","Type":"ContainerDied","Data":"c10f1bf40bdeab4b1bb364e08c2eb4e8a2b8d0c0187ae21b42ff7de76807c94e"} Feb 16 22:04:42 crc kubenswrapper[4792]: I0216 22:04:42.764405 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37d607c0-fb36-4635-9e83-4e07cd4906ff","Type":"ContainerStarted","Data":"945687c27d76b17899d8d8126657d579b49cf30f9c471ce6ec84771c3fd5151f"} Feb 16 22:04:42 crc kubenswrapper[4792]: I0216 22:04:42.765769 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 22:04:42 crc kubenswrapper[4792]: I0216 22:04:42.792335 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.792319793 podStartE2EDuration="36.792319793s" podCreationTimestamp="2026-02-16 22:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:04:42.78487403 +0000 UTC m=+1615.438152921" watchObservedRunningTime="2026-02-16 22:04:42.792319793 +0000 UTC m=+1615.445598684" Feb 16 22:04:44 crc kubenswrapper[4792]: I0216 22:04:44.633585 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:44 crc kubenswrapper[4792]: I0216 22:04:44.634019 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:44 crc kubenswrapper[4792]: I0216 22:04:44.684884 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:44 crc kubenswrapper[4792]: I0216 22:04:44.856761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:44 crc kubenswrapper[4792]: I0216 22:04:44.919014 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:45 crc kubenswrapper[4792]: E0216 22:04:45.028660 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:04:46 crc kubenswrapper[4792]: E0216 22:04:46.027650 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:04:46 crc kubenswrapper[4792]: I0216 22:04:46.820499 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4gx7" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="registry-server" containerID="cri-o://3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb" gracePeriod=2 Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.421948 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.568497 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities\") pod \"9896343c-5af3-4c19-84e2-acddddf062e3\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.568624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content\") pod \"9896343c-5af3-4c19-84e2-acddddf062e3\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.568691 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rp4g\" (UniqueName: \"kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g\") pod \"9896343c-5af3-4c19-84e2-acddddf062e3\" (UID: \"9896343c-5af3-4c19-84e2-acddddf062e3\") " Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.569313 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities" (OuterVolumeSpecName: "utilities") pod "9896343c-5af3-4c19-84e2-acddddf062e3" (UID: "9896343c-5af3-4c19-84e2-acddddf062e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.579955 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g" (OuterVolumeSpecName: "kube-api-access-8rp4g") pod "9896343c-5af3-4c19-84e2-acddddf062e3" (UID: "9896343c-5af3-4c19-84e2-acddddf062e3"). InnerVolumeSpecName "kube-api-access-8rp4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.630259 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9896343c-5af3-4c19-84e2-acddddf062e3" (UID: "9896343c-5af3-4c19-84e2-acddddf062e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.671427 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.671454 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9896343c-5af3-4c19-84e2-acddddf062e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.671467 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rp4g\" (UniqueName: \"kubernetes.io/projected/9896343c-5af3-4c19-84e2-acddddf062e3-kube-api-access-8rp4g\") on node \"crc\" DevicePath \"\"" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.835242 4792 generic.go:334] "Generic (PLEG): container finished" podID="9896343c-5af3-4c19-84e2-acddddf062e3" containerID="3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb" exitCode=0 Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.835294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerDied","Data":"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb"} Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.835332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4gx7" event={"ID":"9896343c-5af3-4c19-84e2-acddddf062e3","Type":"ContainerDied","Data":"b41e6e6751bfa3c2540202db01b51cee214d85ed0198fc2ab574690e43a14fe9"} Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.835354 4792 scope.go:117] "RemoveContainer" containerID="3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.835367 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4gx7" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.860977 4792 scope.go:117] "RemoveContainer" containerID="27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.900879 4792 scope.go:117] "RemoveContainer" containerID="13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.900906 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.917718 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r4gx7"] Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.943992 4792 scope.go:117] "RemoveContainer" containerID="3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb" Feb 16 22:04:47 crc kubenswrapper[4792]: E0216 22:04:47.944760 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb\": container with ID starting with 3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb not found: ID does not exist" containerID="3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.944828 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb"} err="failed to get container status \"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb\": rpc error: code = NotFound desc = could not find container \"3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb\": container with ID starting with 3ca921eabc025af36397fcf087c7ecb0c7faa9cb3fa527732b6fbb26463c5bdb not found: ID does not exist" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.944872 4792 scope.go:117] "RemoveContainer" containerID="27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005" Feb 16 22:04:47 crc kubenswrapper[4792]: E0216 22:04:47.945635 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005\": container with ID starting with 27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005 not found: ID does not exist" containerID="27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.945691 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005"} err="failed to get container status \"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005\": rpc error: code = NotFound desc = could not find container \"27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005\": container with ID starting with 27808c95fed9497cdc8896646dad5c5dc60090800a7006615859356a25ac4005 not found: ID does not exist" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.945720 4792 scope.go:117] "RemoveContainer" containerID="13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738" Feb 16 22:04:47 crc kubenswrapper[4792]: E0216 22:04:47.946248 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738\": container with ID starting with 13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738 not found: ID does not exist" containerID="13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738" Feb 16 22:04:47 crc kubenswrapper[4792]: I0216 22:04:47.946297 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738"} err="failed to get container status \"13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738\": rpc error: code = NotFound desc = could not find container \"13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738\": container with ID starting with 13175b562de9fe0cc6f766d4bb561c05165a5a1cf0e722d0636243255ea39738 not found: ID does not exist" Feb 16 22:04:48 crc kubenswrapper[4792]: I0216 22:04:48.042474 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" path="/var/lib/kubelet/pods/9896343c-5af3-4c19-84e2-acddddf062e3/volumes" Feb 16 22:04:51 crc kubenswrapper[4792]: I0216 22:04:51.026524 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:04:51 crc kubenswrapper[4792]: E0216 22:04:51.028269 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:04:56 crc kubenswrapper[4792]: I0216 22:04:56.780072 4792 scope.go:117] "RemoveContainer" containerID="d2b8bc3e0f5096593470ee6cb457091a4effafd8290fe14545303aa7648d35a7" Feb 16 22:04:56 crc kubenswrapper[4792]: I0216 22:04:56.838238 4792 scope.go:117] "RemoveContainer" containerID="6492ad36f33c8e7001262910a59cafca97908ab648f406003297b7c2fc2e33e0" Feb 16 22:04:56 crc kubenswrapper[4792]: I0216 22:04:56.898966 4792 scope.go:117] "RemoveContainer" containerID="f05449ebc304e7a6125ca00b8374fada11f603f0318a4f7546ea0cfa9094ca70" Feb 16 22:04:56 crc kubenswrapper[4792]: I0216 22:04:56.961739 4792 scope.go:117] "RemoveContainer" containerID="96f3c67fef5fa3064328203bcaa69ecbd74d3ab11c1d0ca0b014261a3b51bd3e" Feb 16 22:04:57 crc kubenswrapper[4792]: I0216 22:04:57.000800 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 22:04:57 crc kubenswrapper[4792]: I0216 22:04:57.072886 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 22:04:59 crc kubenswrapper[4792]: E0216 22:04:59.028571 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:05:00 crc kubenswrapper[4792]: E0216 22:05:00.029472 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:05:01 crc kubenswrapper[4792]: I0216 22:05:01.240275 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="rabbitmq" containerID="cri-o://b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d" gracePeriod=604796 Feb 16 22:05:05 crc kubenswrapper[4792]: I0216 22:05:05.027424 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:05:05 crc kubenswrapper[4792]: E0216 22:05:05.028303 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.006120 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.098122 4792 generic.go:334] "Generic (PLEG): container finished" podID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerID="b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d" exitCode=0 Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.098162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerDied","Data":"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d"} Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.098213 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0b0738-c9c3-4b4f-86a2-8bb113270613","Type":"ContainerDied","Data":"4e76442976f4fe438029d0ca9d3c5049b91d5c4914ba80f2128fd10dc25f281a"} Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.098230 4792 scope.go:117] "RemoveContainer" containerID="b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.098462 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.133541 4792 scope.go:117] "RemoveContainer" containerID="bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.165835 4792 scope.go:117] "RemoveContainer" containerID="b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.167267 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d\": container with ID starting with b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d not found: ID does not exist" containerID="b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.167308 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d"} err="failed to get container status \"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d\": rpc error: code = NotFound desc = could not find container \"b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d\": container with ID starting with b40cce30924a17f9ffcd18ee3f84bb247cb3b4793f87ada0fe3bc8c21596999d not found: ID does not exist" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.167350 4792 scope.go:117] "RemoveContainer" containerID="bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.167696 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a\": container with ID starting with bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a not found: ID does not exist" containerID="bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.167725 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a"} err="failed to get container status \"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a\": rpc error: code = NotFound desc = could not find container \"bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a\": container with ID starting with bdd62980cc6b783c39a29c193d70d1a5ed556ffc1bbf970918064bb13eb60d8a not found: ID does not exist" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.195782 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.195887 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.195993 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.196083 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.196123 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5l7v\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.196840 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.196880 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.196943 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.197078 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.197114 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.197170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data\") pod \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\" (UID: \"9b0b0738-c9c3-4b4f-86a2-8bb113270613\") " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.198433 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.199281 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.200132 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.212399 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.220433 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v" (OuterVolumeSpecName: "kube-api-access-n5l7v") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "kube-api-access-n5l7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.228501 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.240872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304561 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304610 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0b0738-c9c3-4b4f-86a2-8bb113270613-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304619 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0b0738-c9c3-4b4f-86a2-8bb113270613-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304630 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304639 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5l7v\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-kube-api-access-n5l7v\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304648 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.304658 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.338945 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data" (OuterVolumeSpecName: "config-data") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.406355 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.446179 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.509045 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0b0738-c9c3-4b4f-86a2-8bb113270613-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.511579 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7" (OuterVolumeSpecName: "persistence") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "pvc-2f03824c-a751-4d46-98e2-085e0e680ee7". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.523018 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b0b0738-c9c3-4b4f-86a2-8bb113270613" (UID: "9b0b0738-c9c3-4b4f-86a2-8bb113270613"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.610887 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") on node \"crc\" " Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.610931 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0b0738-c9c3-4b4f-86a2-8bb113270613-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.639206 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.639341 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2f03824c-a751-4d46-98e2-085e0e680ee7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7") on node "crc" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.713664 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") on node \"crc\" DevicePath \"\"" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.753337 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.768169 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.791663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.792338 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="extract-utilities" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792366 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="extract-utilities" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.792421 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="setup-container" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792435 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="setup-container" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.792449 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="extract-content" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792460 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="extract-content" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.792477 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="registry-server" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792487 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="registry-server" Feb 16 22:05:08 crc kubenswrapper[4792]: E0216 22:05:08.792544 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="rabbitmq" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792555 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="rabbitmq" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792958 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9896343c-5af3-4c19-84e2-acddddf062e3" containerName="registry-server" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.792989 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" containerName="rabbitmq" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.795199 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.828974 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.917886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rb72\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-kube-api-access-6rb72\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918441 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918524 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918620 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd000b08-b38a-4541-959f-e1c3151131d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918811 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918855 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:08 crc kubenswrapper[4792]: I0216 22:05:08.918954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd000b08-b38a-4541-959f-e1c3151131d6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.021930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd000b08-b38a-4541-959f-e1c3151131d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022153 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022182 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd000b08-b38a-4541-959f-e1c3151131d6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 
crc kubenswrapper[4792]: I0216 22:05:09.022377 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rb72\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-kube-api-access-6rb72\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022435 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022523 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.022945 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.023172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.023283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.023840 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.026396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.027350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd000b08-b38a-4541-959f-e1c3151131d6-pod-info\") 
pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.028329 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.029070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd000b08-b38a-4541-959f-e1c3151131d6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.032228 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.032259 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4a7b9fb20bf9a324e2b8a4fd513a909868f60c1cc47520451461303a70b0b164/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.038425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd000b08-b38a-4541-959f-e1c3151131d6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.043377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rb72\" (UniqueName: \"kubernetes.io/projected/bd000b08-b38a-4541-959f-e1c3151131d6-kube-api-access-6rb72\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.095417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f03824c-a751-4d46-98e2-085e0e680ee7\") pod \"rabbitmq-server-0\" (UID: \"bd000b08-b38a-4541-959f-e1c3151131d6\") " pod="openstack/rabbitmq-server-0" Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.121833 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 22:05:09 crc kubenswrapper[4792]: W0216 22:05:09.686186 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd000b08_b38a_4541_959f_e1c3151131d6.slice/crio-ea9f054c82c092bd2e61589c54f58c6d9bf35a388a8cd12ae0f10fd018f9471f WatchSource:0}: Error finding container ea9f054c82c092bd2e61589c54f58c6d9bf35a388a8cd12ae0f10fd018f9471f: Status 404 returned error can't find the container with id ea9f054c82c092bd2e61589c54f58c6d9bf35a388a8cd12ae0f10fd018f9471f
Feb 16 22:05:09 crc kubenswrapper[4792]: I0216 22:05:09.695790 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 22:05:10 crc kubenswrapper[4792]: I0216 22:05:10.039235 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0b0738-c9c3-4b4f-86a2-8bb113270613" path="/var/lib/kubelet/pods/9b0b0738-c9c3-4b4f-86a2-8bb113270613/volumes"
Feb 16 22:05:10 crc kubenswrapper[4792]: I0216 22:05:10.121640 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd000b08-b38a-4541-959f-e1c3151131d6","Type":"ContainerStarted","Data":"ea9f054c82c092bd2e61589c54f58c6d9bf35a388a8cd12ae0f10fd018f9471f"}
Feb 16 22:05:12 crc kubenswrapper[4792]: I0216 22:05:12.147247 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd000b08-b38a-4541-959f-e1c3151131d6","Type":"ContainerStarted","Data":"460f65098ed76116ae4096bb57a4c511b0530ac5ddfe224dffb043a57c794a78"}
Feb 16 22:05:13 crc kubenswrapper[4792]: E0216 22:05:13.029304 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:05:13 crc kubenswrapper[4792]: E0216 22:05:13.029650 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:05:18 crc kubenswrapper[4792]: I0216 22:05:18.039629 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:05:18 crc kubenswrapper[4792]: E0216 22:05:18.041905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:05:26 crc kubenswrapper[4792]: E0216 22:05:26.046269 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:05:28 crc kubenswrapper[4792]: E0216 22:05:28.043016 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:05:29 crc kubenswrapper[4792]: I0216 22:05:29.026253 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:05:29 crc kubenswrapper[4792]: E0216 22:05:29.027311 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:05:39 crc kubenswrapper[4792]: E0216 22:05:39.028816 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:05:39 crc kubenswrapper[4792]: E0216 22:05:39.028834 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:05:43 crc kubenswrapper[4792]: I0216 22:05:43.026172 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:05:43 crc kubenswrapper[4792]: E0216 22:05:43.026927 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:05:44 crc kubenswrapper[4792]: I0216 22:05:44.504349 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd000b08-b38a-4541-959f-e1c3151131d6" containerID="460f65098ed76116ae4096bb57a4c511b0530ac5ddfe224dffb043a57c794a78" exitCode=0
Feb 16 22:05:44 crc kubenswrapper[4792]: I0216 22:05:44.504427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd000b08-b38a-4541-959f-e1c3151131d6","Type":"ContainerDied","Data":"460f65098ed76116ae4096bb57a4c511b0530ac5ddfe224dffb043a57c794a78"}
Feb 16 22:05:45 crc kubenswrapper[4792]: I0216 22:05:45.515935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd000b08-b38a-4541-959f-e1c3151131d6","Type":"ContainerStarted","Data":"b7c05ba47441d53bd863531f509a672afdf378dd6676cf870bdccdb2ead66aa3"}
Feb 16 22:05:45 crc kubenswrapper[4792]: I0216 22:05:45.516511 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
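[Note: the "back-off 5m0s" in the CrashLoopBackOff entries above is the kubelet's restart back-off cap. Under the kubelet's default policy the delay starts at 10s and doubles per crash until it pins at 5m0s; the Go sketch below just reproduces that doubling for illustration and is not kubelet code.]

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet defaults: initial back-off 10s, doubling per restart,
    	// capped at the 5m0s quoted in the log lines above.
    	backoff, maxBackoff := 10*time.Second, 5*time.Minute
    	for restart := 1; restart <= 8; restart++ {
    		fmt.Printf("restart %d: wait %s\n", restart, backoff)
    		if backoff *= 2; backoff > maxBackoff {
    			backoff = maxBackoff // stays pinned here, as in the repeated entries
    		}
    	}
    }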
Feb 16 22:05:45 crc kubenswrapper[4792]: I0216 22:05:45.561387 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.561368319 podStartE2EDuration="37.561368319s" podCreationTimestamp="2026-02-16 22:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:05:45.551938414 +0000 UTC m=+1678.205217305" watchObservedRunningTime="2026-02-16 22:05:45.561368319 +0000 UTC m=+1678.214647210"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.144225 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.144805 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.144966 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.146094 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.150519 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.150557 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.150724 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:05:53 crc kubenswrapper[4792]: E0216 22:05:53.152012 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
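[Note: the "reading manifest current-tested ... unknown" failures above correspond to the registry's manifest endpoint no longer serving the tag. A minimal Go probe of that endpoint, assuming only the OCI distribution-spec URL shape /v2/<name>/manifests/<reference>; whether quay.rdoproject.org is reachable from your environment is not assumed.]

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// A deleted or expired tag typically answers 404 (MANIFEST_UNKNOWN) here,
    	// which CRI-O surfaces as the "reading manifest ... unknown" error above.
    	url := "https://quay.rdoproject.org/v2/podified-master-centos10/openstack-heat-engine/manifests/current-tested"
    	req, err := http.NewRequest(http.MethodHead, url, nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // expect a non-200 while the tag stays deleted/expired
    }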
Feb 16 22:05:57 crc kubenswrapper[4792]: I0216 22:05:57.173936 4792 scope.go:117] "RemoveContainer" containerID="4ac96c6c4f416fc908217311796ababed49263c9f5015ac1f809158960ff2e5a"
Feb 16 22:05:57 crc kubenswrapper[4792]: I0216 22:05:57.209745 4792 scope.go:117] "RemoveContainer" containerID="39f0e39f59fbaa62eb8d053c3f37df585719ffd0c68cf325fcf2debf0fdfafc7"
Feb 16 22:05:58 crc kubenswrapper[4792]: I0216 22:05:58.035280 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:05:58 crc kubenswrapper[4792]: E0216 22:05:58.035929 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:05:59 crc kubenswrapper[4792]: I0216 22:05:59.124770 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 22:06:05 crc kubenswrapper[4792]: E0216 22:06:05.028856 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:06:08 crc kubenswrapper[4792]: E0216 22:06:08.040678 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:06:13 crc kubenswrapper[4792]: I0216 22:06:13.027104 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:06:13 crc kubenswrapper[4792]: E0216 22:06:13.028041 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:06:18 crc kubenswrapper[4792]: E0216 22:06:18.042061 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:06:19 crc kubenswrapper[4792]: E0216 22:06:19.028790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:06:27 crc kubenswrapper[4792]: I0216 22:06:27.027173 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:06:27 crc kubenswrapper[4792]: E0216 22:06:27.027868 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:06:29 crc kubenswrapper[4792]: E0216 22:06:29.029567 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:06:34 crc kubenswrapper[4792]: E0216 22:06:34.029241 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:06:38 crc kubenswrapper[4792]: I0216 22:06:38.034511 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:06:38 crc kubenswrapper[4792]: E0216 22:06:38.035295 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:06:43 crc kubenswrapper[4792]: E0216 22:06:43.028567 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:06:47 crc kubenswrapper[4792]: E0216 22:06:47.030016 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:06:50 crc kubenswrapper[4792]: I0216 22:06:50.026577 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:06:50 crc kubenswrapper[4792]: E0216 22:06:50.027705 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:06:55 crc kubenswrapper[4792]: E0216 22:06:55.029049 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.322892 4792 scope.go:117] "RemoveContainer" containerID="0841c2f2da3cb21170af22daed7a33f914e8e42380b26518a4a9960680d91dd8"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.360684 4792 scope.go:117] "RemoveContainer" containerID="fc347e4348bb0d11e8b36f98b21bb931bcc37f4b92d7dc6f987dfda108c16ca8"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.388479 4792 scope.go:117] "RemoveContainer" containerID="210f8a4c954205057af3fdeb1521ea040821c44ba58ac9d42deb19082122ec2b"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.416851 4792 scope.go:117] "RemoveContainer" containerID="1ab296b49a469f4f67f81284ebda7a4950a5b2db94751c191379c6101f646019"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.450625 4792 scope.go:117] "RemoveContainer" containerID="5b4e3d26546f4579929b2d5fdc7cf4dcefa6a5adf946afeb5f1c6959e9495926"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.472769 4792 scope.go:117] "RemoveContainer" containerID="2f0db2c562133693caa8111d5ac50ef75ec1c3c2b171fd2a807ee3a40d9ea9b0"
Feb 16 22:06:57 crc kubenswrapper[4792]: I0216 22:06:57.500293 4792 scope.go:117] "RemoveContainer" containerID="e1a001bdd889500b588eb1b797e9317fff6a976c6834cb04dd76892a3f9e0f80"
Feb 16 22:06:59 crc kubenswrapper[4792]: E0216 22:06:59.029575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:07:05 crc kubenswrapper[4792]: I0216 22:07:05.027031 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab"
Feb 16 22:07:05 crc kubenswrapper[4792]: E0216 22:07:05.027707 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:07:05 crc kubenswrapper[4792]: I0216 22:07:05.492162 4792 generic.go:334] "Generic (PLEG): container finished" podID="425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" containerID="3eaaa81a6fe5a1c50ad2754ae1c6683d085542cec2d917508c7c00a3d66a24ed" exitCode=0
Feb 16 22:07:05 crc kubenswrapper[4792]: I0216 22:07:05.492214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" event={"ID":"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8","Type":"ContainerDied","Data":"3eaaa81a6fe5a1c50ad2754ae1c6683d085542cec2d917508c7c00a3d66a24ed"}
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.054280 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6372-account-create-update-tsd5b"]
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.062859 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.066678 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6372-account-create-update-tsd5b"]
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.117827 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4k6\" (UniqueName: \"kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6\") pod \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") "
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.118104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory\") pod \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") "
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.118238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle\") pod \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") "
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.118388 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam\") pod \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\" (UID: \"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8\") "
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.126545 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" (UID: "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.127172 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6" (OuterVolumeSpecName: "kube-api-access-fk4k6") pod "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" (UID: "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8"). InnerVolumeSpecName "kube-api-access-fk4k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.154892 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" (UID: "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.168836 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory" (OuterVolumeSpecName: "inventory") pod "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" (UID: "425f7d1f-0118-4ce5-95f5-a6f2a336dfa8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.221461 4792 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.221487 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.221496 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4k6\" (UniqueName: \"kubernetes.io/projected/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-kube-api-access-fk4k6\") on node \"crc\" DevicePath \"\""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.221507 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425f7d1f-0118-4ce5-95f5-a6f2a336dfa8-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.512955 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc" event={"ID":"425f7d1f-0118-4ce5-95f5-a6f2a336dfa8","Type":"ContainerDied","Data":"113adc5b2556a6d3d60bc3f52e6a1708ff79fa2afa547db63e3851ae6ac1dab4"}
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.512992 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113adc5b2556a6d3d60bc3f52e6a1708ff79fa2afa547db63e3851ae6ac1dab4"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.513017 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.629766 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"]
Feb 16 22:07:07 crc kubenswrapper[4792]: E0216 22:07:07.630733 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.630762 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.631072 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="425f7d1f-0118-4ce5-95f5-a6f2a336dfa8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.632154 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.639152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.639151 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.639346 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.639371 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.669105 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"]
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.734747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.734926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txk2l\" (UniqueName: \"kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.735014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.836869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txk2l\" (UniqueName: \"kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.836945 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.837073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.841967 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.844182 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.856845 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txk2l\" (UniqueName: \"kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:07 crc kubenswrapper[4792]: I0216 22:07:07.962087 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:07:08 crc kubenswrapper[4792]: E0216 22:07:08.046024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.050808 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67a67b7-bc6b-438b-8802-a81b934c2135" path="/var/lib/kubelet/pods/f67a67b7-bc6b-438b-8802-a81b934c2135/volumes"
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.052816 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tzwjt"]
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.070536 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jl449"]
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.083229 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tzwjt"]
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.096492 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jl449"]
Feb 16 22:07:08 crc kubenswrapper[4792]: I0216 22:07:08.592696 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"]
Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.037566 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-92e9-account-create-update-g97nz"]
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-92e9-account-create-update-g97nz"] Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.070473 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qjr26"] Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.086723 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5331-account-create-update-qsq8t"] Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.132768 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qjr26"] Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.140416 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5331-account-create-update-qsq8t"] Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.538178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp" event={"ID":"65f41687-f567-41a0-8ec2-3ac03e464ebe","Type":"ContainerStarted","Data":"73df445c8525ad599c27870a5217886f28fb547e00d62aabea95b6c850ea3308"} Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.538502 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp" event={"ID":"65f41687-f567-41a0-8ec2-3ac03e464ebe","Type":"ContainerStarted","Data":"d478d4f3f37ea5166d7ac01f374d236d8bdf6da73069fcf52ac64fcabd6e1aa6"} Feb 16 22:07:09 crc kubenswrapper[4792]: I0216 22:07:09.610044 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp" podStartSLOduration=2.049716527 podStartE2EDuration="2.610019009s" podCreationTimestamp="2026-02-16 22:07:07 +0000 UTC" firstStartedPulling="2026-02-16 22:07:08.597202621 +0000 UTC m=+1761.250481532" lastFinishedPulling="2026-02-16 22:07:09.157505123 +0000 UTC m=+1761.810784014" observedRunningTime="2026-02-16 22:07:09.592687158 +0000 UTC m=+1762.245966049" watchObservedRunningTime="2026-02-16 22:07:09.610019009 +0000 UTC m=+1762.263297900" Feb 16 22:07:10 crc kubenswrapper[4792]: E0216 22:07:10.029655 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.053221 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49265dfe-072f-483c-a891-510f3b17498c" path="/var/lib/kubelet/pods/49265dfe-072f-483c-a891-510f3b17498c/volumes" Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.054585 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9607ed45-f58d-4edc-8f15-069b36ce8ce1" path="/var/lib/kubelet/pods/9607ed45-f58d-4edc-8f15-069b36ce8ce1/volumes" Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.055247 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade7459b-8627-4e5e-a075-e86a88b9eaf0" path="/var/lib/kubelet/pods/ade7459b-8627-4e5e-a075-e86a88b9eaf0/volumes" Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.056317 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa06059-0788-46d7-b688-68141d71b288" path="/var/lib/kubelet/pods/baa06059-0788-46d7-b688-68141d71b288/volumes" Feb 16 22:07:10 crc 
kubenswrapper[4792]: I0216 22:07:10.057426 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee1fc47-fd26-4e80-9640-960ee64b5839" path="/var/lib/kubelet/pods/eee1fc47-fd26-4e80-9640-960ee64b5839/volumes" Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.058136 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-r9f8h"] Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.058163 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d07e-account-create-update-x8jwp"] Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.058177 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-r9f8h"] Feb 16 22:07:10 crc kubenswrapper[4792]: I0216 22:07:10.068765 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d07e-account-create-update-x8jwp"] Feb 16 22:07:12 crc kubenswrapper[4792]: I0216 22:07:12.042231 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d90353-5fb7-4eca-878f-fe0ce1e0a5a4" path="/var/lib/kubelet/pods/29d90353-5fb7-4eca-878f-fe0ce1e0a5a4/volumes" Feb 16 22:07:12 crc kubenswrapper[4792]: I0216 22:07:12.046040 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ede8625-b8a4-4d49-abc2-9c4fb8edab4e" path="/var/lib/kubelet/pods/2ede8625-b8a4-4d49-abc2-9c4fb8edab4e/volumes" Feb 16 22:07:16 crc kubenswrapper[4792]: I0216 22:07:16.026235 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:07:16 crc kubenswrapper[4792]: E0216 22:07:16.027045 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:07:19 crc kubenswrapper[4792]: E0216 22:07:19.028902 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:07:20 crc kubenswrapper[4792]: I0216 22:07:20.038163 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-scwfx"] Feb 16 22:07:20 crc kubenswrapper[4792]: I0216 22:07:20.047365 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-scwfx"] Feb 16 22:07:21 crc kubenswrapper[4792]: I0216 22:07:21.036148 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b08c-account-create-update-2gzj8"] Feb 16 22:07:21 crc kubenswrapper[4792]: I0216 22:07:21.047878 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b08c-account-create-update-2gzj8"] Feb 16 22:07:22 crc kubenswrapper[4792]: I0216 22:07:22.041942 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521cf6b2-e2cf-4ae6-a34c-71e15d93916f" path="/var/lib/kubelet/pods/521cf6b2-e2cf-4ae6-a34c-71e15d93916f/volumes" Feb 16 22:07:22 crc kubenswrapper[4792]: I0216 
22:07:22.043067 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49abd6e-b475-4ad2-a88a-c0dc37ab2997" path="/var/lib/kubelet/pods/b49abd6e-b475-4ad2-a88a-c0dc37ab2997/volumes" Feb 16 22:07:25 crc kubenswrapper[4792]: E0216 22:07:25.029874 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:07:30 crc kubenswrapper[4792]: I0216 22:07:30.026846 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:07:30 crc kubenswrapper[4792]: E0216 22:07:30.027972 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:07:34 crc kubenswrapper[4792]: E0216 22:07:34.029583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:07:37 crc kubenswrapper[4792]: I0216 22:07:37.063838 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwdsh"] Feb 16 22:07:37 crc kubenswrapper[4792]: I0216 22:07:37.077474 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwdsh"] Feb 16 22:07:38 crc kubenswrapper[4792]: E0216 22:07:38.029371 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:07:38 crc kubenswrapper[4792]: I0216 22:07:38.044056 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc980837-58f8-41b6-97a5-f210e7fd10d0" path="/var/lib/kubelet/pods/fc980837-58f8-41b6-97a5-f210e7fd10d0/volumes" Feb 16 22:07:43 crc kubenswrapper[4792]: I0216 22:07:43.026514 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:07:43 crc kubenswrapper[4792]: E0216 22:07:43.028065 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:07:43 crc kubenswrapper[4792]: I0216 22:07:43.049103 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8982l"] Feb 16 22:07:43 crc kubenswrapper[4792]: I0216 
22:07:43.066189 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8982l"] Feb 16 22:07:44 crc kubenswrapper[4792]: I0216 22:07:44.039163 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63303797-e14d-4091-ab14-8be69dd506ad" path="/var/lib/kubelet/pods/63303797-e14d-4091-ab14-8be69dd506ad/volumes" Feb 16 22:07:49 crc kubenswrapper[4792]: E0216 22:07:49.029482 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:07:49 crc kubenswrapper[4792]: E0216 22:07:49.029647 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:07:56 crc kubenswrapper[4792]: I0216 22:07:56.027130 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:07:56 crc kubenswrapper[4792]: E0216 22:07:56.029322 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.636060 4792 scope.go:117] "RemoveContainer" containerID="68bdb856e434dfda4401fbdb963fdf8c70d9f0e5b81d9115ef0b2c13a64432d6" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.664831 4792 scope.go:117] "RemoveContainer" containerID="7ae24978e92b225bb03d73c08958d4f53f8f9beb9dfd2bd4874c8732387b2260" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.738120 4792 scope.go:117] "RemoveContainer" containerID="ee1cd0327853fae844b7c25f8e66e22210410a0970726941b3a0ae69286447a5" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.809776 4792 scope.go:117] "RemoveContainer" containerID="efde2ac3826c82fd210c05b8b6f844a98d01c07293e97df4cf91cdadb1f2e197" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.864446 4792 scope.go:117] "RemoveContainer" containerID="30f031ce65b8dcaf0bda33292307d349ba80835e9778dd80115920fbf32d0d54" Feb 16 22:07:57 crc kubenswrapper[4792]: I0216 22:07:57.939108 4792 scope.go:117] "RemoveContainer" containerID="bfd28e1726d7ef61ed7cd5f2f6e68412f52a638a0116f8943801f6cdefaa71d8" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.002705 4792 scope.go:117] "RemoveContainer" containerID="95b8bcd10b57a60cb16862e81771772c1735b8ce8d190126ff999cc7e692ac85" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.032954 4792 scope.go:117] "RemoveContainer" containerID="b663d0075946f0a904abe40156d2586468fb1f0a28da1e515cf4fa2d18416f48" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.063863 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r7rzq"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.074651 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-f4de-account-create-update-n2r2d"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.094229 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r7rzq"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.095174 4792 scope.go:117] "RemoveContainer" containerID="6dc269f7ab8c1f41f052d48b7b6a698a2abf8c93e5b024d26b1c43bdb7daff34" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.122469 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4a27-account-create-update-ljhjm"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.134712 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-p6tpn"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.145763 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f4de-account-create-update-n2r2d"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.147909 4792 scope.go:117] "RemoveContainer" containerID="a786f13858cd51ec5e6296f07241ef244e5cc436745f9c0aabe763c8b9e933d7" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.156760 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4a27-account-create-update-ljhjm"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.165444 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-p6tpn"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.169451 4792 scope.go:117] "RemoveContainer" containerID="510fe6772647da03c5a6805a5d078b9c24b97851379fb669d2c16bdb94cc9938" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.175029 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-x996w"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.185898 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-x996w"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.195732 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6hn4j"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.196122 4792 scope.go:117] "RemoveContainer" containerID="85a4c3d670fbf7833eb37099b9949151e0492b4cee3e4ab8b6c083612cae3570" Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.205655 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-989d-account-create-update-5x2fg"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.216754 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6hn4j"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.226623 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-989d-account-create-update-5x2fg"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.237338 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f6a4-account-create-update-tnht4"] Feb 16 22:07:58 crc kubenswrapper[4792]: I0216 22:07:58.247217 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f6a4-account-create-update-tnht4"] Feb 16 22:08:00 crc kubenswrapper[4792]: E0216 22:08:00.028631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:08:00 crc 
kubenswrapper[4792]: I0216 22:08:00.041185 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4034f818-c02e-451d-92ae-ebf4deb873ab" path="/var/lib/kubelet/pods/4034f818-c02e-451d-92ae-ebf4deb873ab/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.042170 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646151e2-5537-4de8-a366-f2e2aa64a307" path="/var/lib/kubelet/pods/646151e2-5537-4de8-a366-f2e2aa64a307/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.043914 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7463b1e3-c90a-4525-a6d4-6d7892578aae" path="/var/lib/kubelet/pods/7463b1e3-c90a-4525-a6d4-6d7892578aae/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.044833 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ee292f-3c7f-4131-8f57-682fe8679f15" path="/var/lib/kubelet/pods/88ee292f-3c7f-4131-8f57-682fe8679f15/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.046281 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80bdd05-0def-4f41-a14a-5ad83cd6428f" path="/var/lib/kubelet/pods/b80bdd05-0def-4f41-a14a-5ad83cd6428f/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.047802 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf27831-30f8-406a-a277-c6e61987fe35" path="/var/lib/kubelet/pods/bcf27831-30f8-406a-a277-c6e61987fe35/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.049344 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8ad371-835d-4087-b6c5-00576bc60ab8" path="/var/lib/kubelet/pods/be8ad371-835d-4087-b6c5-00576bc60ab8/volumes" Feb 16 22:08:00 crc kubenswrapper[4792]: I0216 22:08:00.051291 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9482b7-b9a0-4114-92d2-be2276447412" path="/var/lib/kubelet/pods/eb9482b7-b9a0-4114-92d2-be2276447412/volumes" Feb 16 22:08:02 crc kubenswrapper[4792]: E0216 22:08:02.028880 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:08:02 crc kubenswrapper[4792]: I0216 22:08:02.046742 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sjs8x"] Feb 16 22:08:02 crc kubenswrapper[4792]: I0216 22:08:02.058044 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sjs8x"] Feb 16 22:08:04 crc kubenswrapper[4792]: I0216 22:08:04.039317 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b77bea6-4e1c-42d4-a33c-da52abd756a6" path="/var/lib/kubelet/pods/2b77bea6-4e1c-42d4-a33c-da52abd756a6/volumes" Feb 16 22:08:07 crc kubenswrapper[4792]: I0216 22:08:07.026171 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:08:07 crc kubenswrapper[4792]: E0216 22:08:07.027631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:08:15 crc kubenswrapper[4792]: E0216 22:08:15.030225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:08:16 crc kubenswrapper[4792]: E0216 22:08:16.028028 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:08:20 crc kubenswrapper[4792]: I0216 22:08:20.026869 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:08:20 crc kubenswrapper[4792]: E0216 22:08:20.027951 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:08:29 crc kubenswrapper[4792]: E0216 22:08:29.027905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:08:29 crc kubenswrapper[4792]: E0216 22:08:29.030157 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:08:30 crc kubenswrapper[4792]: I0216 22:08:30.058462 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mg87r"] Feb 16 22:08:30 crc kubenswrapper[4792]: I0216 22:08:30.075301 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mg87r"] Feb 16 22:08:32 crc kubenswrapper[4792]: I0216 22:08:32.066113 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c" path="/var/lib/kubelet/pods/23f6bbcf-4bb4-478e-b6a7-d5f1eb66ec7c/volumes" Feb 16 22:08:34 crc kubenswrapper[4792]: I0216 22:08:34.029364 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:08:34 crc kubenswrapper[4792]: I0216 22:08:34.585483 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42"} Feb 16 22:08:36 crc kubenswrapper[4792]: I0216 22:08:36.055056 4792 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7vsw9"] Feb 16 22:08:36 crc kubenswrapper[4792]: I0216 22:08:36.070427 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7vsw9"] Feb 16 22:08:38 crc kubenswrapper[4792]: I0216 22:08:38.042114 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64774f1f-f141-4fad-a226-1ac6b3a93782" path="/var/lib/kubelet/pods/64774f1f-f141-4fad-a226-1ac6b3a93782/volumes" Feb 16 22:08:40 crc kubenswrapper[4792]: I0216 22:08:40.032636 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:08:40 crc kubenswrapper[4792]: E0216 22:08:40.130279 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:08:40 crc kubenswrapper[4792]: E0216 22:08:40.130562 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:08:40 crc kubenswrapper[4792]: E0216 22:08:40.130802 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:08:40 crc kubenswrapper[4792]: E0216 22:08:40.132444 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:08:41 crc kubenswrapper[4792]: I0216 22:08:41.042056 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jsrtw"] Feb 16 22:08:41 crc kubenswrapper[4792]: I0216 22:08:41.052233 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4qx2s"] Feb 16 22:08:41 crc kubenswrapper[4792]: I0216 22:08:41.061934 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4qx2s"] Feb 16 22:08:41 crc kubenswrapper[4792]: I0216 22:08:41.072377 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jsrtw"] Feb 16 22:08:42 crc kubenswrapper[4792]: I0216 22:08:42.046833 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7c29a5-bb18-4493-99b4-63546d7bffc8" path="/var/lib/kubelet/pods/4f7c29a5-bb18-4493-99b4-63546d7bffc8/volumes" Feb 16 22:08:42 crc kubenswrapper[4792]: I0216 22:08:42.049504 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b62519-345c-4ed1-b2cc-63186693467d" path="/var/lib/kubelet/pods/92b62519-345c-4ed1-b2cc-63186693467d/volumes" Feb 16 22:08:44 crc kubenswrapper[4792]: E0216 22:08:44.146409 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:08:44 crc kubenswrapper[4792]: E0216 22:08:44.146838 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:08:44 crc kubenswrapper[4792]: E0216 22:08:44.146980 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:08:44 crc kubenswrapper[4792]: E0216 22:08:44.148208 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:08:51 crc kubenswrapper[4792]: E0216 22:08:51.029041 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:08:55 crc kubenswrapper[4792]: E0216 22:08:55.029220 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:08:56 crc kubenswrapper[4792]: I0216 22:08:56.075388 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jvjtg"] Feb 16 22:08:56 crc kubenswrapper[4792]: I0216 22:08:56.092853 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jvjtg"] Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.038475 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6432216a-a549-4060-8369-b6a0d86f1ba2" path="/var/lib/kubelet/pods/6432216a-a549-4060-8369-b6a0d86f1ba2/volumes" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.592948 4792 scope.go:117] "RemoveContainer" containerID="272562e601ee35eb182c445747fc389a6c22272eee06ea75d29521a0c0774033" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.630977 4792 scope.go:117] "RemoveContainer" containerID="70ebae63c739e4cbe7307aaeb7839559d1ff4df39eab94e8efa35058cc0a18c9" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.699003 4792 scope.go:117] "RemoveContainer" containerID="73edc24681473eecf50dfa8dbe83f85015c9c7bf6eec8a79117e2599acec5666" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.752329 4792 scope.go:117] "RemoveContainer" containerID="fb4c5a8b42e6c0cec9da4150d6f7e7ab23fc96ee7b9a286cbb3aee474bcf29b2" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.807015 4792 scope.go:117] "RemoveContainer" containerID="8cdb18e83507104ae90346580353c7398167b9112d9bd849b693bad27f548046" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.871587 4792 scope.go:117] "RemoveContainer" containerID="0d97562f245edff7e667772debe2af2b3722ed6710e10c772c0d145d308f9bf8" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.926279 4792 scope.go:117] "RemoveContainer" containerID="106c365e149408f83cdf4810688160480cb6b3a7fdd7e5a03c0cc9ff6385e9ef" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.952050 4792 scope.go:117] "RemoveContainer" containerID="77c29ffe59fe1d03ac0877a65d3075a6c761bd42cdb9f6b2c2e8787086faa429" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.973949 4792 scope.go:117] "RemoveContainer" 
containerID="029bae05d728ad2b343e88dbf0d0ffaa3e2ec37322443cf9db14af4b0d14ddb6" Feb 16 22:08:58 crc kubenswrapper[4792]: I0216 22:08:58.999823 4792 scope.go:117] "RemoveContainer" containerID="10c66b0ccfd225fa0795e614048cd558fe795172ef58fc81d5ab670419caea4c" Feb 16 22:08:59 crc kubenswrapper[4792]: I0216 22:08:59.025752 4792 scope.go:117] "RemoveContainer" containerID="9140ed259305e859ec4639f76119ac046ec64742dfcbe6946acdb68b95ba7a55" Feb 16 22:08:59 crc kubenswrapper[4792]: I0216 22:08:59.045266 4792 scope.go:117] "RemoveContainer" containerID="b4de07b889d2b2b23a99c349151052c0b355c503d3dc19562cd48a2b5c241d21" Feb 16 22:08:59 crc kubenswrapper[4792]: I0216 22:08:59.072692 4792 scope.go:117] "RemoveContainer" containerID="14e179d1594a1dad5a8b6bbc516a3156a3b7dfc968b1d4d68dc001b7f4b9502b" Feb 16 22:08:59 crc kubenswrapper[4792]: I0216 22:08:59.095654 4792 scope.go:117] "RemoveContainer" containerID="0da8855b787c27d78ff7c5127b606db896f4aac12c638aa59eb2510bbe276e34" Feb 16 22:09:02 crc kubenswrapper[4792]: E0216 22:09:02.031978 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:09:10 crc kubenswrapper[4792]: E0216 22:09:10.030804 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:09:14 crc kubenswrapper[4792]: E0216 22:09:14.030396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:09:23 crc kubenswrapper[4792]: E0216 22:09:23.028814 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:09:28 crc kubenswrapper[4792]: E0216 22:09:28.036659 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:09:36 crc kubenswrapper[4792]: E0216 22:09:36.028911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:09:39 crc kubenswrapper[4792]: E0216 22:09:39.029116 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:09:48 crc kubenswrapper[4792]: I0216 22:09:48.052880 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8bbwt"] Feb 16 22:09:48 crc kubenswrapper[4792]: I0216 22:09:48.065114 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8bbwt"] Feb 16 22:09:49 crc kubenswrapper[4792]: E0216 22:09:49.029647 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.045082 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b826e6-839e-4981-9c0e-1ae295f48f5b" path="/var/lib/kubelet/pods/25b826e6-839e-4981-9c0e-1ae295f48f5b/volumes" Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.060306 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-x7q8m"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.076224 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lz59p"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.086823 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-caca-account-create-update-rbbc9"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.095959 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-92cd-account-create-update-7vhk7"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.106654 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lz59p"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.115537 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-x7q8m"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.124284 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-caca-account-create-update-rbbc9"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.132616 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-92cd-account-create-update-7vhk7"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.141080 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-96ae-account-create-update-qpv9p"] Feb 16 22:09:50 crc kubenswrapper[4792]: I0216 22:09:50.149466 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-96ae-account-create-update-qpv9p"] Feb 16 22:09:52 crc kubenswrapper[4792]: I0216 22:09:52.041078 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0297de14-9244-4cda-93b7-a75b5ac58348" path="/var/lib/kubelet/pods/0297de14-9244-4cda-93b7-a75b5ac58348/volumes" Feb 16 22:09:52 crc kubenswrapper[4792]: I0216 22:09:52.042377 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a55719-97b7-4243-bfa3-e918b61ec76a" path="/var/lib/kubelet/pods/48a55719-97b7-4243-bfa3-e918b61ec76a/volumes" Feb 16 22:09:52 crc kubenswrapper[4792]: I0216 22:09:52.044082 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="704c2346-0609-42f5-89da-db7d8950ea83" path="/var/lib/kubelet/pods/704c2346-0609-42f5-89da-db7d8950ea83/volumes" Feb 16 22:09:52 crc kubenswrapper[4792]: I0216 22:09:52.044719 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77f3054-3a84-4e5f-8c60-b5906b353be7" path="/var/lib/kubelet/pods/b77f3054-3a84-4e5f-8c60-b5906b353be7/volumes" Feb 16 22:09:52 crc kubenswrapper[4792]: I0216 22:09:52.046147 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5" path="/var/lib/kubelet/pods/bc17a1ee-2c1d-4f72-bcff-4d2d90b7f5f5/volumes" Feb 16 22:09:53 crc kubenswrapper[4792]: E0216 22:09:53.029724 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.377237 4792 scope.go:117] "RemoveContainer" containerID="442b8137dfec4f7543c25d6b561a36996e2d2bb50837b4b7d632c0d7a855f393" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.421064 4792 scope.go:117] "RemoveContainer" containerID="900470fd3943b2444291dbb3c44a9dc953e1dc8f8ba04f6a812cf15af9a91a9c" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.481354 4792 scope.go:117] "RemoveContainer" containerID="65cc72c66e6922ac3ace2620557de53d5d6a57924fa68fc1c84d73667a5a1615" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.571418 4792 scope.go:117] "RemoveContainer" containerID="25d8afdb9806799f24e58bdeb956bc822f941d5e88b1763a8eca4e422d7d234d" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.629345 4792 scope.go:117] "RemoveContainer" containerID="9ca90f8b09f16568a357c0e9444156e96f841b8c7311e461aa07ef03dcacf102" Feb 16 22:09:59 crc kubenswrapper[4792]: I0216 22:09:59.684031 4792 scope.go:117] "RemoveContainer" containerID="4de1185d51c9a3255491dfc5bddf05c12c939916cc016b92bc061fb1423e60fa" Feb 16 22:10:02 crc kubenswrapper[4792]: E0216 22:10:02.028486 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:10:08 crc kubenswrapper[4792]: E0216 22:10:08.037009 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:10:17 crc kubenswrapper[4792]: E0216 22:10:17.029710 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:10:19 crc kubenswrapper[4792]: I0216 22:10:19.064382 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-qcr7g"] Feb 16 22:10:19 crc kubenswrapper[4792]: I0216 22:10:19.075028 4792 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-3730-account-create-update-m7svz"] Feb 16 22:10:19 crc kubenswrapper[4792]: I0216 22:10:19.086202 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-qcr7g"] Feb 16 22:10:19 crc kubenswrapper[4792]: I0216 22:10:19.095701 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-3730-account-create-update-m7svz"] Feb 16 22:10:20 crc kubenswrapper[4792]: I0216 22:10:20.051946 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee8442a-1298-42d2-ab10-ac48aabf89ae" path="/var/lib/kubelet/pods/4ee8442a-1298-42d2-ab10-ac48aabf89ae/volumes" Feb 16 22:10:20 crc kubenswrapper[4792]: I0216 22:10:20.056868 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa786547-92a7-41b6-9da0-98b1492e513f" path="/var/lib/kubelet/pods/fa786547-92a7-41b6-9da0-98b1492e513f/volumes" Feb 16 22:10:21 crc kubenswrapper[4792]: E0216 22:10:21.028592 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:10:25 crc kubenswrapper[4792]: I0216 22:10:25.032244 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bjbmf"] Feb 16 22:10:25 crc kubenswrapper[4792]: I0216 22:10:25.048083 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bjbmf"] Feb 16 22:10:26 crc kubenswrapper[4792]: I0216 22:10:26.048742 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba5cbbf-97a4-4785-9927-5e40e2b5fd7a" path="/var/lib/kubelet/pods/dba5cbbf-97a4-4785-9927-5e40e2b5fd7a/volumes" Feb 16 22:10:28 crc kubenswrapper[4792]: E0216 22:10:28.037932 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:10:36 crc kubenswrapper[4792]: E0216 22:10:36.029799 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:10:36 crc kubenswrapper[4792]: I0216 22:10:36.043077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-4zbd8"] Feb 16 22:10:36 crc kubenswrapper[4792]: I0216 22:10:36.053117 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-4zbd8"] Feb 16 22:10:38 crc kubenswrapper[4792]: I0216 22:10:38.048560 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daa36328-3bf1-4306-ba33-69217b14a2a5" path="/var/lib/kubelet/pods/daa36328-3bf1-4306-ba33-69217b14a2a5/volumes" Feb 16 22:10:41 crc kubenswrapper[4792]: E0216 22:10:41.028588 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:10:49 crc kubenswrapper[4792]: E0216 22:10:49.029851 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:10:51 crc kubenswrapper[4792]: I0216 22:10:51.030015 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgrtz"] Feb 16 22:10:51 crc kubenswrapper[4792]: I0216 22:10:51.042314 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgrtz"] Feb 16 22:10:52 crc kubenswrapper[4792]: I0216 22:10:52.041265 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1a09bd-f9f3-4fd9-89a8-c11010239591" path="/var/lib/kubelet/pods/2a1a09bd-f9f3-4fd9-89a8-c11010239591/volumes" Feb 16 22:10:52 crc kubenswrapper[4792]: I0216 22:10:52.042434 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wjlz9"] Feb 16 22:10:52 crc kubenswrapper[4792]: I0216 22:10:52.062835 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wjlz9"] Feb 16 22:10:54 crc kubenswrapper[4792]: I0216 22:10:54.038118 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aafdfa-5637-4a23-acd9-48d520e0d082" path="/var/lib/kubelet/pods/a7aafdfa-5637-4a23-acd9-48d520e0d082/volumes" Feb 16 22:10:56 crc kubenswrapper[4792]: E0216 22:10:56.091447 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:10:59 crc kubenswrapper[4792]: I0216 22:10:59.860346 4792 scope.go:117] "RemoveContainer" containerID="74aa7947b36e9def994c0f46e36b3dbbbf8d5597b5094129f133b66d1702aa79" Feb 16 22:10:59 crc kubenswrapper[4792]: I0216 22:10:59.894957 4792 scope.go:117] "RemoveContainer" containerID="d6a451fdd95bfe0e604149b6a8587432a2f6bf66af2f3c037b56076fb3a3343e" Feb 16 22:10:59 crc kubenswrapper[4792]: I0216 22:10:59.977863 4792 scope.go:117] "RemoveContainer" containerID="ad1e8f9f13b5ec4738719bfca21c43ab265cb3250c9dd389438b25fbf39ba7de" Feb 16 22:11:00 crc kubenswrapper[4792]: I0216 22:11:00.072048 4792 scope.go:117] "RemoveContainer" containerID="07e12e812f39afd826e3f07afb57b98715b747012433c2861430d4e813e455c8" Feb 16 22:11:00 crc kubenswrapper[4792]: E0216 22:11:00.072231 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:11:00 crc kubenswrapper[4792]: I0216 22:11:00.103879 4792 scope.go:117] "RemoveContainer" containerID="5193278bd80b544b0b0bc7373ff1a71db8c2cd2c7b9d25ac39cc5aca3a17f631" Feb 16 22:11:00 crc kubenswrapper[4792]: I0216 22:11:00.207553 4792 scope.go:117] "RemoveContainer" 
containerID="1b355a2a9768678a526868ec53d7fe2551627963c253a1a2e6f4b39661c3cf66" Feb 16 22:11:01 crc kubenswrapper[4792]: I0216 22:11:01.532577 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:11:01 crc kubenswrapper[4792]: I0216 22:11:01.532984 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:11:09 crc kubenswrapper[4792]: E0216 22:11:09.028570 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:11:11 crc kubenswrapper[4792]: E0216 22:11:11.028228 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:11:22 crc kubenswrapper[4792]: E0216 22:11:22.029878 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:11:23 crc kubenswrapper[4792]: E0216 22:11:23.027986 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:11:31 crc kubenswrapper[4792]: I0216 22:11:31.532033 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:11:31 crc kubenswrapper[4792]: I0216 22:11:31.532664 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:11:33 crc kubenswrapper[4792]: E0216 22:11:33.029548 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:11:37 crc kubenswrapper[4792]: I0216 22:11:37.045360 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27pr"] Feb 16 22:11:37 crc kubenswrapper[4792]: I0216 22:11:37.059280 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27pr"] Feb 16 22:11:38 crc kubenswrapper[4792]: E0216 22:11:38.041906 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:11:38 crc kubenswrapper[4792]: I0216 22:11:38.059734 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679ad2bc-eced-4c08-8c45-29b7e4f6c3f8" path="/var/lib/kubelet/pods/679ad2bc-eced-4c08-8c45-29b7e4f6c3f8/volumes" Feb 16 22:11:47 crc kubenswrapper[4792]: E0216 22:11:47.028284 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:11:52 crc kubenswrapper[4792]: E0216 22:11:52.028752 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:12:00 crc kubenswrapper[4792]: E0216 22:12:00.029262 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:12:00 crc kubenswrapper[4792]: I0216 22:12:00.441452 4792 scope.go:117] "RemoveContainer" containerID="50898a4098d7562b2ec8429f06197a8a64523f0b11cf5a58c14c86cf7254f9df" Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.532838 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.533178 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.533221 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.534341 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.534413 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42" gracePeriod=600 Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.896662 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42" exitCode=0 Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.896714 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42"} Feb 16 22:12:01 crc kubenswrapper[4792]: I0216 22:12:01.896760 4792 scope.go:117] "RemoveContainer" containerID="989a6c0281e0c5c3027ddfdcd376e6ddd8d7e02a9794efdaf61bd133f799b3ab" Feb 16 22:12:02 crc kubenswrapper[4792]: I0216 22:12:02.908167 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64"} Feb 16 22:12:03 crc kubenswrapper[4792]: E0216 22:12:03.028336 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:12:11 crc kubenswrapper[4792]: E0216 22:12:11.031656 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:12:14 crc kubenswrapper[4792]: E0216 22:12:14.029679 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:12:23 crc kubenswrapper[4792]: E0216 22:12:23.027858 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.690360 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"] Feb 16 
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.693844 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.722183 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"]
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.830146 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-catalog-content\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.830429 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.830948 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrpv\" (UniqueName: \"kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.933152 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrpv\" (UniqueName: \"kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.933263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-catalog-content\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.933388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.934010 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.934519 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-catalog-content\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:23 crc kubenswrapper[4792]: I0216 22:12:23.966470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrpv\" (UniqueName: \"kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv\") pod \"redhat-operators-gh9tp\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") " pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:24 crc kubenswrapper[4792]: I0216 22:12:24.059235 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:24 crc kubenswrapper[4792]: I0216 22:12:24.558352 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"]
Feb 16 22:12:25 crc kubenswrapper[4792]: I0216 22:12:25.127517 4792 generic.go:334] "Generic (PLEG): container finished" podID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerID="3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e" exitCode=0
Feb 16 22:12:25 crc kubenswrapper[4792]: I0216 22:12:25.127750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerDied","Data":"3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e"}
Feb 16 22:12:25 crc kubenswrapper[4792]: I0216 22:12:25.127773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerStarted","Data":"ce32c84e3377580138e424d09b13778d5bba6f2daefb234df452a12b52215aeb"}
Feb 16 22:12:27 crc kubenswrapper[4792]: I0216 22:12:27.178972 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerStarted","Data":"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5"}
Feb 16 22:12:29 crc kubenswrapper[4792]: E0216 22:12:29.028804 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:12:31 crc kubenswrapper[4792]: I0216 22:12:31.228857 4792 generic.go:334] "Generic (PLEG): container finished" podID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerID="d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5" exitCode=0
Feb 16 22:12:31 crc kubenswrapper[4792]: I0216 22:12:31.229353 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerDied","Data":"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5"}
Feb 16 22:12:32 crc kubenswrapper[4792]: I0216 22:12:32.262283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerStarted","Data":"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163"}
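[Editor's note] The "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: it periodically relists container states from the runtime and turns state diffs into ContainerStarted/ContainerDied events for the sync loop. A toy version of that diff, with invented types (the real PLEG tracks far more state):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // relist compares the previous and current container states and emits
    // PLEG-style events for the sync loop to consume.
    func relist(prev, curr map[string]state) []string {
        var events []string
        for id, s := range curr {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                events = append(events, "ContainerStarted "+id)
            case seen && old == running && s == exited:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{"3b588d63": running}
        curr := map[string]state{"3b588d63": exited, "ce32c84e": running}
        for _, e := range relist(prev, curr) {
            fmt.Println(e) // e.g. ContainerDied 3b588d63, ContainerStarted ce32c84e
        }
    }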
firstStartedPulling="2026-02-16 22:12:25.12991787 +0000 UTC m=+2077.783196761" lastFinishedPulling="2026-02-16 22:12:31.669733435 +0000 UTC m=+2084.323012326" observedRunningTime="2026-02-16 22:12:32.285021763 +0000 UTC m=+2084.938300654" watchObservedRunningTime="2026-02-16 22:12:32.290375109 +0000 UTC m=+2084.943654000" Feb 16 22:12:34 crc kubenswrapper[4792]: I0216 22:12:34.059662 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gh9tp" Feb 16 22:12:34 crc kubenswrapper[4792]: I0216 22:12:34.060003 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gh9tp" Feb 16 22:12:35 crc kubenswrapper[4792]: I0216 22:12:35.124266 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gh9tp" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="registry-server" probeResult="failure" output=< Feb 16 22:12:35 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:12:35 crc kubenswrapper[4792]: > Feb 16 22:12:36 crc kubenswrapper[4792]: E0216 22:12:36.028683 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:12:43 crc kubenswrapper[4792]: E0216 22:12:43.027557 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:12:44 crc kubenswrapper[4792]: I0216 22:12:44.108588 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gh9tp" Feb 16 22:12:44 crc kubenswrapper[4792]: I0216 22:12:44.188173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gh9tp" Feb 16 22:12:44 crc kubenswrapper[4792]: I0216 22:12:44.352818 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"] Feb 16 22:12:45 crc kubenswrapper[4792]: I0216 22:12:45.400796 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gh9tp" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="registry-server" containerID="cri-o://c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163" gracePeriod=2 Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.063709 4792 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.063709 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh9tp"
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.096685 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities\") pod \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") "
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.096773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-catalog-content\") pod \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") "
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.096825 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrpv\" (UniqueName: \"kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv\") pod \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\" (UID: \"78adf81b-fc7a-4fdf-9609-c4714d1b18e2\") "
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.098065 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities" (OuterVolumeSpecName: "utilities") pod "78adf81b-fc7a-4fdf-9609-c4714d1b18e2" (UID: "78adf81b-fc7a-4fdf-9609-c4714d1b18e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.107831 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv" (OuterVolumeSpecName: "kube-api-access-sjrpv") pod "78adf81b-fc7a-4fdf-9609-c4714d1b18e2" (UID: "78adf81b-fc7a-4fdf-9609-c4714d1b18e2"). InnerVolumeSpecName "kube-api-access-sjrpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.199636 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.199672 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjrpv\" (UniqueName: \"kubernetes.io/projected/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-kube-api-access-sjrpv\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.302061 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78adf81b-fc7a-4fdf-9609-c4714d1b18e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.413771 4792 generic.go:334] "Generic (PLEG): container finished" podID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerID="c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163" exitCode=0 Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.413817 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerDied","Data":"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163"} Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.413844 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh9tp" event={"ID":"78adf81b-fc7a-4fdf-9609-c4714d1b18e2","Type":"ContainerDied","Data":"ce32c84e3377580138e424d09b13778d5bba6f2daefb234df452a12b52215aeb"} Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.413862 4792 scope.go:117] "RemoveContainer" containerID="c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.413880 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh9tp" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.447476 4792 scope.go:117] "RemoveContainer" containerID="d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.455724 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"] Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.467359 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gh9tp"] Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.477450 4792 scope.go:117] "RemoveContainer" containerID="3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.531515 4792 scope.go:117] "RemoveContainer" containerID="c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163" Feb 16 22:12:46 crc kubenswrapper[4792]: E0216 22:12:46.532352 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163\": container with ID starting with c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163 not found: ID does not exist" containerID="c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.532382 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163"} err="failed to get container status \"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163\": rpc error: code = NotFound desc = could not find container \"c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163\": container with ID starting with c6e899e4e6286057ab8d03d6a29ca50ca9f76951a9e5ddb29959dc563be9a163 not found: ID does not exist" Feb 16 22:12:46 crc 
kubenswrapper[4792]: I0216 22:12:46.532402 4792 scope.go:117] "RemoveContainer" containerID="d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5" Feb 16 22:12:46 crc kubenswrapper[4792]: E0216 22:12:46.532885 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5\": container with ID starting with d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5 not found: ID does not exist" containerID="d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.532912 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5"} err="failed to get container status \"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5\": rpc error: code = NotFound desc = could not find container \"d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5\": container with ID starting with d019b4f6a953943d43c5fce2d81da7ebd250005f661768ab4a764201561adae5 not found: ID does not exist" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.532952 4792 scope.go:117] "RemoveContainer" containerID="3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e" Feb 16 22:12:46 crc kubenswrapper[4792]: E0216 22:12:46.533217 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e\": container with ID starting with 3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e not found: ID does not exist" containerID="3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e" Feb 16 22:12:46 crc kubenswrapper[4792]: I0216 22:12:46.533244 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e"} err="failed to get container status \"3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e\": rpc error: code = NotFound desc = could not find container \"3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e\": container with ID starting with 3b588d63f15b9115b35114b87705c2e604f743674d0aa8eebad2077dc15bc61e not found: ID does not exist" Feb 16 22:12:47 crc kubenswrapper[4792]: E0216 22:12:47.029072 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:12:48 crc kubenswrapper[4792]: I0216 22:12:48.046648 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" path="/var/lib/kubelet/pods/78adf81b-fc7a-4fdf-9609-c4714d1b18e2/volumes" Feb 16 22:12:56 crc kubenswrapper[4792]: E0216 22:12:56.029474 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:12:59 crc kubenswrapper[4792]: 
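[Editor's note] The "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above are a benign race: by the time the removal worker asked CRI-O about the exited containers, they had already been removed. Cleanup code against a gRPC runtime typically treats NotFound as success so deletion stays idempotent; a sketch of that pattern (removeFromRuntime is a stand-in, not an actual CRI call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeFromRuntime stands in for a runtime RemoveContainer RPC; here it
    // always reports the container as missing, like the log above.
    func removeFromRuntime(id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // removeContainer treats NotFound as "already removed" so that deletion
    // is safe to retry and safe to race against container GC.
    func removeContainer(id string) error {
        if err := removeFromRuntime(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already gone: nothing left to do
            }
            return fmt.Errorf("remove %s: %w", id, err)
        }
        return nil
    }

    func main() {
        if err := removeContainer("c6e899e4e628"); err == nil {
            fmt.Println("NotFound treated as successful removal")
        }
    }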
Feb 16 22:12:59 crc kubenswrapper[4792]: E0216 22:12:59.028946 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:13:10 crc kubenswrapper[4792]: E0216 22:13:10.028437 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:13:10 crc kubenswrapper[4792]: E0216 22:13:10.028540 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:13:23 crc kubenswrapper[4792]: E0216 22:13:23.030832 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:13:23 crc kubenswrapper[4792]: E0216 22:13:23.031115 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:13:34 crc kubenswrapper[4792]: E0216 22:13:34.029350 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:13:38 crc kubenswrapper[4792]: E0216 22:13:38.034812 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:13:47 crc kubenswrapper[4792]: I0216 22:13:47.029620 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:13:47 crc kubenswrapper[4792]: E0216 22:13:47.151589 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:13:47 crc kubenswrapper[4792]: E0216 22:13:47.151685 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:13:47 crc kubenswrapper[4792]: E0216 22:13:47.151889 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:13:47 crc kubenswrapper[4792]: E0216 22:13:47.153238 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:13:50 crc kubenswrapper[4792]: E0216 22:13:50.107845 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:13:50 crc kubenswrapper[4792]: E0216 22:13:50.108722 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:13:50 crc kubenswrapper[4792]: E0216 22:13:50.108881 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:13:50 crc kubenswrapper[4792]: E0216 22:13:50.110084 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:13:55 crc kubenswrapper[4792]: I0216 22:13:55.141245 4792 generic.go:334] "Generic (PLEG): container finished" podID="65f41687-f567-41a0-8ec2-3ac03e464ebe" containerID="73df445c8525ad599c27870a5217886f28fb547e00d62aabea95b6c850ea3308" exitCode=2
Feb 16 22:13:55 crc kubenswrapper[4792]: I0216 22:13:55.141408 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp" event={"ID":"65f41687-f567-41a0-8ec2-3ac03e464ebe","Type":"ContainerDied","Data":"73df445c8525ad599c27870a5217886f28fb547e00d62aabea95b6c850ea3308"}
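[Editor's note] Note the error class change at 22:13:47-22:13:50: the sync now fails with ErrImagePull (an actual pull attempt that the registry rejected because the tag was deleted) instead of ImagePullBackOff (waiting out the delay between attempts). The kubelet spaces those attempts with capped exponential back-off; the loose model below assumes the commonly cited defaults of a 10-second base, doubling, and a 5-minute cap, which may not match every configuration:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the back-off after each failed pull, up to a cap.
    func nextDelay(d, maxDelay time.Duration) time.Duration {
        d *= 2
        if d > maxDelay {
            return maxDelay
        }
        return d
    }

    func main() {
        delay := 10 * time.Second   // assumed initial back-off
        maxDelay := 5 * time.Minute // assumed maximum back-off
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d failed; next retry in %s\n", attempt, delay)
            delay = nextDelay(delay, maxDelay)
        }
        // prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
    }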
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.797084 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.931579 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory\") pod \"65f41687-f567-41a0-8ec2-3ac03e464ebe\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") "
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.931868 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam\") pod \"65f41687-f567-41a0-8ec2-3ac03e464ebe\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") "
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.932004 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txk2l\" (UniqueName: \"kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l\") pod \"65f41687-f567-41a0-8ec2-3ac03e464ebe\" (UID: \"65f41687-f567-41a0-8ec2-3ac03e464ebe\") "
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.937106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l" (OuterVolumeSpecName: "kube-api-access-txk2l") pod "65f41687-f567-41a0-8ec2-3ac03e464ebe" (UID: "65f41687-f567-41a0-8ec2-3ac03e464ebe"). InnerVolumeSpecName "kube-api-access-txk2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.963636 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory" (OuterVolumeSpecName: "inventory") pod "65f41687-f567-41a0-8ec2-3ac03e464ebe" (UID: "65f41687-f567-41a0-8ec2-3ac03e464ebe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:13:56 crc kubenswrapper[4792]: I0216 22:13:56.964125 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65f41687-f567-41a0-8ec2-3ac03e464ebe" (UID: "65f41687-f567-41a0-8ec2-3ac03e464ebe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.035100 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txk2l\" (UniqueName: \"kubernetes.io/projected/65f41687-f567-41a0-8ec2-3ac03e464ebe-kube-api-access-txk2l\") on node \"crc\" DevicePath \"\""
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.035134 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.035145 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65f41687-f567-41a0-8ec2-3ac03e464ebe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.170328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp" event={"ID":"65f41687-f567-41a0-8ec2-3ac03e464ebe","Type":"ContainerDied","Data":"d478d4f3f37ea5166d7ac01f374d236d8bdf6da73069fcf52ac64fcabd6e1aa6"}
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.170372 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d478d4f3f37ea5166d7ac01f374d236d8bdf6da73069fcf52ac64fcabd6e1aa6"
Feb 16 22:13:57 crc kubenswrapper[4792]: I0216 22:13:57.170376 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp"
Feb 16 22:13:59 crc kubenswrapper[4792]: E0216 22:13:59.028328 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:14:01 crc kubenswrapper[4792]: I0216 22:14:01.531912 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:14:01 crc kubenswrapper[4792]: I0216 22:14:01.532301 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.040170 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g"]
Feb 16 22:14:04 crc kubenswrapper[4792]: E0216 22:14:04.041057 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="extract-content"
Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041078 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="extract-content"
podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="extract-utilities" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041113 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="extract-utilities" Feb 16 22:14:04 crc kubenswrapper[4792]: E0216 22:14:04.041135 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="registry-server" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041147 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="registry-server" Feb 16 22:14:04 crc kubenswrapper[4792]: E0216 22:14:04.041172 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f41687-f567-41a0-8ec2-3ac03e464ebe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041184 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f41687-f567-41a0-8ec2-3ac03e464ebe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041551 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f41687-f567-41a0-8ec2-3ac03e464ebe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.041628 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="78adf81b-fc7a-4fdf-9609-c4714d1b18e2" containerName="registry-server" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.042994 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.046530 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g"] Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.060113 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.060237 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.060500 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.060825 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.103612 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.103703 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxb6\" (UniqueName: \"kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: 
\"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.103736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.205522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxb6\" (UniqueName: \"kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.205887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.206063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.206731 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"] Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.209667 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.224876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.234357 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.234802 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"] Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.240394 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxb6\" (UniqueName: \"kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.307856 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.308040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.308210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875sl\" (UniqueName: \"kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.375382 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.410202 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.410383 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-875sl\" (UniqueName: \"kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.410420 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.410962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.411025 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.428735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-875sl\" (UniqueName: \"kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl\") pod \"redhat-marketplace-sxx8b\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.649762 4792 util.go:30] "No sandbox for pod can be found. 
Feb 16 22:14:04 crc kubenswrapper[4792]: I0216 22:14:04.649762 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxx8b"
Feb 16 22:14:05 crc kubenswrapper[4792]: E0216 22:14:05.028669 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:14:05 crc kubenswrapper[4792]: I0216 22:14:05.097157 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g"]
Feb 16 22:14:05 crc kubenswrapper[4792]: I0216 22:14:05.209280 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"]
Feb 16 22:14:05 crc kubenswrapper[4792]: I0216 22:14:05.323711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerStarted","Data":"323c2fcfeec0869110e3f1c76a45b9efa5a3a7760af20a2097a22a05b560ac28"}
Feb 16 22:14:05 crc kubenswrapper[4792]: I0216 22:14:05.326129 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" event={"ID":"79c18359-29ae-4f68-aee4-ada05c949dfd","Type":"ContainerStarted","Data":"9e8e6bff210f7b9e561dc5e49dcce681e3cc941a691896dd0d5ed0c39335a865"}
Feb 16 22:14:06 crc kubenswrapper[4792]: I0216 22:14:06.336925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" event={"ID":"79c18359-29ae-4f68-aee4-ada05c949dfd","Type":"ContainerStarted","Data":"8e79a539ef772bdd4504627bbc5ff363b2d61644b5b0e11dfce1af7e94c9105c"}
Feb 16 22:14:06 crc kubenswrapper[4792]: I0216 22:14:06.339867 4792 generic.go:334] "Generic (PLEG): container finished" podID="6242147b-68ef-48d4-b0f4-177e61585372" containerID="afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee" exitCode=0
Feb 16 22:14:06 crc kubenswrapper[4792]: I0216 22:14:06.339925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerDied","Data":"afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee"}
Feb 16 22:14:06 crc kubenswrapper[4792]: I0216 22:14:06.359083 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" podStartSLOduration=1.8632604640000001 podStartE2EDuration="2.359063327s" podCreationTimestamp="2026-02-16 22:14:04 +0000 UTC" firstStartedPulling="2026-02-16 22:14:05.094732085 +0000 UTC m=+2177.748010976" lastFinishedPulling="2026-02-16 22:14:05.590534938 +0000 UTC m=+2178.243813839" observedRunningTime="2026-02-16 22:14:06.353619409 +0000 UTC m=+2179.006898320" watchObservedRunningTime="2026-02-16 22:14:06.359063327 +0000 UTC m=+2179.012342208"
Feb 16 22:14:08 crc kubenswrapper[4792]: I0216 22:14:08.364175 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerStarted","Data":"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f"}
Feb 16 22:14:09 crc kubenswrapper[4792]: I0216 22:14:09.375410 4792 generic.go:334] "Generic (PLEG): container finished" podID="6242147b-68ef-48d4-b0f4-177e61585372" containerID="68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f" exitCode=0
Feb 16 22:14:09 crc kubenswrapper[4792]: I0216 22:14:09.375457 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerDied","Data":"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f"}
Feb 16 22:14:10 crc kubenswrapper[4792]: I0216 22:14:10.388086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerStarted","Data":"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77"}
Feb 16 22:14:12 crc kubenswrapper[4792]: E0216 22:14:12.028817 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:14:12 crc kubenswrapper[4792]: I0216 22:14:12.052682 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxx8b" podStartSLOduration=4.614071561 podStartE2EDuration="8.052665334s" podCreationTimestamp="2026-02-16 22:14:04 +0000 UTC" firstStartedPulling="2026-02-16 22:14:06.34150806 +0000 UTC m=+2178.994786951" lastFinishedPulling="2026-02-16 22:14:09.780101833 +0000 UTC m=+2182.433380724" observedRunningTime="2026-02-16 22:14:10.411325274 +0000 UTC m=+2183.064604185" watchObservedRunningTime="2026-02-16 22:14:12.052665334 +0000 UTC m=+2184.705944225"
Feb 16 22:14:14 crc kubenswrapper[4792]: I0216 22:14:14.651759 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxx8b"
Feb 16 22:14:14 crc kubenswrapper[4792]: I0216 22:14:14.652754 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxx8b"
Feb 16 22:14:14 crc kubenswrapper[4792]: I0216 22:14:14.705700 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxx8b"
Feb 16 22:14:15 crc kubenswrapper[4792]: I0216 22:14:15.527527 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxx8b"
Feb 16 22:14:15 crc kubenswrapper[4792]: I0216 22:14:15.583948 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"]
Feb 16 22:14:17 crc kubenswrapper[4792]: E0216 22:14:17.028837 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:14:17 crc kubenswrapper[4792]: I0216 22:14:17.490804 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxx8b" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="registry-server" containerID="cri-o://d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77" gracePeriod=2
kubenswrapper[4792]: I0216 22:14:18.188062 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.270913 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-875sl\" (UniqueName: \"kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl\") pod \"6242147b-68ef-48d4-b0f4-177e61585372\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.271030 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities\") pod \"6242147b-68ef-48d4-b0f4-177e61585372\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.271378 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content\") pod \"6242147b-68ef-48d4-b0f4-177e61585372\" (UID: \"6242147b-68ef-48d4-b0f4-177e61585372\") " Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.273182 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities" (OuterVolumeSpecName: "utilities") pod "6242147b-68ef-48d4-b0f4-177e61585372" (UID: "6242147b-68ef-48d4-b0f4-177e61585372"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.279397 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl" (OuterVolumeSpecName: "kube-api-access-875sl") pod "6242147b-68ef-48d4-b0f4-177e61585372" (UID: "6242147b-68ef-48d4-b0f4-177e61585372"). InnerVolumeSpecName "kube-api-access-875sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.374835 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-875sl\" (UniqueName: \"kubernetes.io/projected/6242147b-68ef-48d4-b0f4-177e61585372-kube-api-access-875sl\") on node \"crc\" DevicePath \"\"" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.374867 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.449352 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6242147b-68ef-48d4-b0f4-177e61585372" (UID: "6242147b-68ef-48d4-b0f4-177e61585372"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.477366 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6242147b-68ef-48d4-b0f4-177e61585372-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.503444 4792 generic.go:334] "Generic (PLEG): container finished" podID="6242147b-68ef-48d4-b0f4-177e61585372" containerID="d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77" exitCode=0 Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.503503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerDied","Data":"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77"} Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.503563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxx8b" event={"ID":"6242147b-68ef-48d4-b0f4-177e61585372","Type":"ContainerDied","Data":"323c2fcfeec0869110e3f1c76a45b9efa5a3a7760af20a2097a22a05b560ac28"} Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.503584 4792 scope.go:117] "RemoveContainer" containerID="d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.503525 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxx8b" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.551352 4792 scope.go:117] "RemoveContainer" containerID="68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.567653 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"] Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.580947 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxx8b"] Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.581833 4792 scope.go:117] "RemoveContainer" containerID="afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.658635 4792 scope.go:117] "RemoveContainer" containerID="d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77" Feb 16 22:14:18 crc kubenswrapper[4792]: E0216 22:14:18.659087 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77\": container with ID starting with d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77 not found: ID does not exist" containerID="d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.659144 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77"} err="failed to get container status \"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77\": rpc error: code = NotFound desc = could not find container \"d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77\": container with ID starting with d910b81ae105bb54f24c011ad9c81097e196c67d6ade939c190a7bb59a592d77 not found: ID does not exist" Feb 16 22:14:18 
crc kubenswrapper[4792]: I0216 22:14:18.659173 4792 scope.go:117] "RemoveContainer" containerID="68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f" Feb 16 22:14:18 crc kubenswrapper[4792]: E0216 22:14:18.659569 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f\": container with ID starting with 68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f not found: ID does not exist" containerID="68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.659620 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f"} err="failed to get container status \"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f\": rpc error: code = NotFound desc = could not find container \"68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f\": container with ID starting with 68221675c3eb840f47c8d1e57fbc9bad2a3593d9c97f5e3401f216c98edf741f not found: ID does not exist" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.659653 4792 scope.go:117] "RemoveContainer" containerID="afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee" Feb 16 22:14:18 crc kubenswrapper[4792]: E0216 22:14:18.659903 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee\": container with ID starting with afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee not found: ID does not exist" containerID="afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee" Feb 16 22:14:18 crc kubenswrapper[4792]: I0216 22:14:18.659937 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee"} err="failed to get container status \"afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee\": rpc error: code = NotFound desc = could not find container \"afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee\": container with ID starting with afff53e44ad30a51247ef6acad30389aa977fc6125c37e7994e9bd3b9827c2ee not found: ID does not exist" Feb 16 22:14:20 crc kubenswrapper[4792]: I0216 22:14:20.037686 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6242147b-68ef-48d4-b0f4-177e61585372" path="/var/lib/kubelet/pods/6242147b-68ef-48d4-b0f4-177e61585372/volumes" Feb 16 22:14:23 crc kubenswrapper[4792]: E0216 22:14:23.029271 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:14:28 crc kubenswrapper[4792]: E0216 22:14:28.034776 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:14:31 crc kubenswrapper[4792]: 
I0216 22:14:31.532268 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:14:31 crc kubenswrapper[4792]: I0216 22:14:31.532856 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:14:36 crc kubenswrapper[4792]: E0216 22:14:36.031010 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:14:43 crc kubenswrapper[4792]: E0216 22:14:43.027745 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:14:49 crc kubenswrapper[4792]: E0216 22:14:49.029054 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:14:54 crc kubenswrapper[4792]: E0216 22:14:54.035249 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.168663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt"] Feb 16 22:15:00 crc kubenswrapper[4792]: E0216 22:15:00.169987 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="extract-content" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.170010 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="extract-content" Feb 16 22:15:00 crc kubenswrapper[4792]: E0216 22:15:00.170085 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="extract-utilities" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.170096 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="extract-utilities" Feb 16 22:15:00 crc kubenswrapper[4792]: E0216 22:15:00.170129 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="registry-server" Feb 16 22:15:00 crc kubenswrapper[4792]: 
I0216 22:15:00.170140 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="registry-server" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.170464 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6242147b-68ef-48d4-b0f4-177e61585372" containerName="registry-server" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.171804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.175756 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.177502 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.181113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt"] Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.340218 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fbl\" (UniqueName: \"kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.340704 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.340731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.443680 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fbl\" (UniqueName: \"kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.443936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.443970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.445515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.455480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.461246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fbl\" (UniqueName: \"kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl\") pod \"collect-profiles-29521335-fb8pt\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.492737 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:00 crc kubenswrapper[4792]: I0216 22:15:00.989998 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt"] Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.085341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" event={"ID":"42ce140f-735e-4460-a10b-4d383cbf8fbf","Type":"ContainerStarted","Data":"003a8c6ccb1f05ba771ede59d84a6f7a591d2c1d8a155a7a0530bf3a378307a1"} Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.532243 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.533347 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.533469 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.534463 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:15:01 crc kubenswrapper[4792]: I0216 22:15:01.534630 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" gracePeriod=600 Feb 16 22:15:01 crc kubenswrapper[4792]: E0216 22:15:01.658164 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:02 crc kubenswrapper[4792]: E0216 22:15:02.028343 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.095111 4792 generic.go:334] "Generic (PLEG): container finished" podID="42ce140f-735e-4460-a10b-4d383cbf8fbf" containerID="81a6f328498bd9a8b48935cd4774a8c89d4cf90ac2946b665aa3bd46c7e71885" exitCode=0 Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.095177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" event={"ID":"42ce140f-735e-4460-a10b-4d383cbf8fbf","Type":"ContainerDied","Data":"81a6f328498bd9a8b48935cd4774a8c89d4cf90ac2946b665aa3bd46c7e71885"} Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.099586 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" exitCode=0 Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.099629 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64"} Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.099664 4792 scope.go:117] "RemoveContainer" containerID="daf5930ff5f44c9845691dae66dcecdc2ad5ee5d92ad34ff86ceda8750297a42" Feb 16 22:15:02 crc kubenswrapper[4792]: I0216 22:15:02.101086 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:15:02 crc kubenswrapper[4792]: E0216 22:15:02.101709 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.525677 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.618802 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68fbl\" (UniqueName: \"kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl\") pod \"42ce140f-735e-4460-a10b-4d383cbf8fbf\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.619170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume\") pod \"42ce140f-735e-4460-a10b-4d383cbf8fbf\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.619223 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume\") pod \"42ce140f-735e-4460-a10b-4d383cbf8fbf\" (UID: \"42ce140f-735e-4460-a10b-4d383cbf8fbf\") " Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.620648 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "42ce140f-735e-4460-a10b-4d383cbf8fbf" (UID: "42ce140f-735e-4460-a10b-4d383cbf8fbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.630013 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42ce140f-735e-4460-a10b-4d383cbf8fbf" (UID: "42ce140f-735e-4460-a10b-4d383cbf8fbf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.633054 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl" (OuterVolumeSpecName: "kube-api-access-68fbl") pod "42ce140f-735e-4460-a10b-4d383cbf8fbf" (UID: "42ce140f-735e-4460-a10b-4d383cbf8fbf"). InnerVolumeSpecName "kube-api-access-68fbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.722494 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ce140f-735e-4460-a10b-4d383cbf8fbf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.722547 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68fbl\" (UniqueName: \"kubernetes.io/projected/42ce140f-735e-4460-a10b-4d383cbf8fbf-kube-api-access-68fbl\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:03 crc kubenswrapper[4792]: I0216 22:15:03.722569 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42ce140f-735e-4460-a10b-4d383cbf8fbf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:15:04 crc kubenswrapper[4792]: I0216 22:15:04.124716 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" Feb 16 22:15:04 crc kubenswrapper[4792]: I0216 22:15:04.125354 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt" event={"ID":"42ce140f-735e-4460-a10b-4d383cbf8fbf","Type":"ContainerDied","Data":"003a8c6ccb1f05ba771ede59d84a6f7a591d2c1d8a155a7a0530bf3a378307a1"} Feb 16 22:15:04 crc kubenswrapper[4792]: I0216 22:15:04.125392 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003a8c6ccb1f05ba771ede59d84a6f7a591d2c1d8a155a7a0530bf3a378307a1" Feb 16 22:15:04 crc kubenswrapper[4792]: I0216 22:15:04.603409 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"] Feb 16 22:15:04 crc kubenswrapper[4792]: I0216 22:15:04.614446 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-7nbqg"] Feb 16 22:15:05 crc kubenswrapper[4792]: E0216 22:15:05.030076 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:15:06 crc kubenswrapper[4792]: I0216 22:15:06.044301 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb51e3c-4f03-4e68-91fe-838816d8a376" path="/var/lib/kubelet/pods/2cb51e3c-4f03-4e68-91fe-838816d8a376/volumes" Feb 16 22:15:17 crc kubenswrapper[4792]: I0216 22:15:17.028080 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:15:17 crc kubenswrapper[4792]: E0216 22:15:17.029290 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:17 crc kubenswrapper[4792]: E0216 22:15:17.032439 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:15:18 crc kubenswrapper[4792]: E0216 22:15:18.043693 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:15:30 crc kubenswrapper[4792]: E0216 22:15:30.029133 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:15:32 crc kubenswrapper[4792]: I0216 22:15:32.026797 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:15:32 crc kubenswrapper[4792]: E0216 22:15:32.027633 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:33 crc kubenswrapper[4792]: E0216 22:15:33.029562 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:15:42 crc kubenswrapper[4792]: E0216 22:15:42.028707 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:15:43 crc kubenswrapper[4792]: I0216 22:15:43.027860 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:15:43 crc kubenswrapper[4792]: E0216 22:15:43.029081 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:47 crc kubenswrapper[4792]: E0216 22:15:47.030351 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:15:47 crc kubenswrapper[4792]: I0216 22:15:47.925735 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:15:47 crc kubenswrapper[4792]: E0216 22:15:47.926389 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ce140f-735e-4460-a10b-4d383cbf8fbf" containerName="collect-profiles" Feb 16 22:15:47 crc kubenswrapper[4792]: I0216 22:15:47.926411 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ce140f-735e-4460-a10b-4d383cbf8fbf" containerName="collect-profiles" Feb 16 22:15:47 crc kubenswrapper[4792]: I0216 22:15:47.926664 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ce140f-735e-4460-a10b-4d383cbf8fbf" containerName="collect-profiles" Feb 16 22:15:47 crc kubenswrapper[4792]: I0216 22:15:47.928370 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:47 crc kubenswrapper[4792]: I0216 22:15:47.938252 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.064332 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55th\" (UniqueName: \"kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.065098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.065252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.167560 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.167827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55th\" (UniqueName: \"kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.168004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.168415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.168452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.188551 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r55th\" (UniqueName: \"kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th\") pod \"community-operators-vqsld\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.267317 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:48 crc kubenswrapper[4792]: I0216 22:15:48.896112 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:15:49 crc kubenswrapper[4792]: I0216 22:15:49.711643 4792 generic.go:334] "Generic (PLEG): container finished" podID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerID="b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88" exitCode=0 Feb 16 22:15:49 crc kubenswrapper[4792]: I0216 22:15:49.711761 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerDied","Data":"b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88"} Feb 16 22:15:49 crc kubenswrapper[4792]: I0216 22:15:49.712072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerStarted","Data":"a95ecdb3bc5a82e05d670f13f702b5e163d89c6829bfbc163b560921bf6eda93"} Feb 16 22:15:52 crc kubenswrapper[4792]: I0216 22:15:52.748491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerStarted","Data":"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716"} Feb 16 22:15:53 crc kubenswrapper[4792]: I0216 22:15:53.767830 4792 generic.go:334] "Generic (PLEG): container finished" podID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerID="390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716" exitCode=0 Feb 16 22:15:53 crc kubenswrapper[4792]: I0216 22:15:53.767944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerDied","Data":"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716"} Feb 16 22:15:54 crc kubenswrapper[4792]: E0216 22:15:54.028701 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:15:54 crc kubenswrapper[4792]: I0216 22:15:54.780240 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerStarted","Data":"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3"} Feb 16 22:15:54 crc kubenswrapper[4792]: I0216 22:15:54.802184 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vqsld" podStartSLOduration=3.354992184 podStartE2EDuration="7.802164905s" podCreationTimestamp="2026-02-16 22:15:47 +0000 UTC" firstStartedPulling="2026-02-16 
22:15:49.713824225 +0000 UTC m=+2282.367103116" lastFinishedPulling="2026-02-16 22:15:54.160996936 +0000 UTC m=+2286.814275837" observedRunningTime="2026-02-16 22:15:54.800017136 +0000 UTC m=+2287.453296047" watchObservedRunningTime="2026-02-16 22:15:54.802164905 +0000 UTC m=+2287.455443806" Feb 16 22:15:57 crc kubenswrapper[4792]: I0216 22:15:57.026928 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:15:57 crc kubenswrapper[4792]: E0216 22:15:57.027552 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:15:58 crc kubenswrapper[4792]: I0216 22:15:58.267754 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:58 crc kubenswrapper[4792]: I0216 22:15:58.268966 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:58 crc kubenswrapper[4792]: I0216 22:15:58.320455 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:15:59 crc kubenswrapper[4792]: E0216 22:15:59.029893 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:16:00 crc kubenswrapper[4792]: I0216 22:16:00.668053 4792 scope.go:117] "RemoveContainer" containerID="f086701fa286eadcf38da7cf233dcbf9422a79a77be07bbf003ddaf47565f56f" Feb 16 22:16:08 crc kubenswrapper[4792]: E0216 22:16:08.041901 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:16:08 crc kubenswrapper[4792]: I0216 22:16:08.313239 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:16:08 crc kubenswrapper[4792]: I0216 22:16:08.385292 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:16:08 crc kubenswrapper[4792]: I0216 22:16:08.948709 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vqsld" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="registry-server" containerID="cri-o://c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3" gracePeriod=2 Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.503373 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.547758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55th\" (UniqueName: \"kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th\") pod \"65a2f16a-6826-495b-aa22-ec4b394828fe\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.548506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities\") pod \"65a2f16a-6826-495b-aa22-ec4b394828fe\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.548687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content\") pod \"65a2f16a-6826-495b-aa22-ec4b394828fe\" (UID: \"65a2f16a-6826-495b-aa22-ec4b394828fe\") " Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.549895 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities" (OuterVolumeSpecName: "utilities") pod "65a2f16a-6826-495b-aa22-ec4b394828fe" (UID: "65a2f16a-6826-495b-aa22-ec4b394828fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.562096 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th" (OuterVolumeSpecName: "kube-api-access-r55th") pod "65a2f16a-6826-495b-aa22-ec4b394828fe" (UID: "65a2f16a-6826-495b-aa22-ec4b394828fe"). InnerVolumeSpecName "kube-api-access-r55th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.590513 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65a2f16a-6826-495b-aa22-ec4b394828fe" (UID: "65a2f16a-6826-495b-aa22-ec4b394828fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.651173 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55th\" (UniqueName: \"kubernetes.io/projected/65a2f16a-6826-495b-aa22-ec4b394828fe-kube-api-access-r55th\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.651211 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.651225 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a2f16a-6826-495b-aa22-ec4b394828fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.962292 4792 generic.go:334] "Generic (PLEG): container finished" podID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerID="c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3" exitCode=0 Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.962334 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerDied","Data":"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3"} Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.962361 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqsld" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.962381 4792 scope.go:117] "RemoveContainer" containerID="c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3" Feb 16 22:16:09 crc kubenswrapper[4792]: I0216 22:16:09.962369 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqsld" event={"ID":"65a2f16a-6826-495b-aa22-ec4b394828fe","Type":"ContainerDied","Data":"a95ecdb3bc5a82e05d670f13f702b5e163d89c6829bfbc163b560921bf6eda93"} Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.013120 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.014994 4792 scope.go:117] "RemoveContainer" containerID="390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.033498 4792 scope.go:117] "RemoveContainer" containerID="b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.038398 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vqsld"] Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.093146 4792 scope.go:117] "RemoveContainer" containerID="c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3" Feb 16 22:16:10 crc kubenswrapper[4792]: E0216 22:16:10.093755 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3\": container with ID starting with c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3 not found: ID does not exist" containerID="c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.093830 
4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3"} err="failed to get container status \"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3\": rpc error: code = NotFound desc = could not find container \"c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3\": container with ID starting with c942a5334014b1e07f8f7587750c68047d1185254a93c2ca92b79e87c2244af3 not found: ID does not exist" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.093870 4792 scope.go:117] "RemoveContainer" containerID="390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716" Feb 16 22:16:10 crc kubenswrapper[4792]: E0216 22:16:10.094271 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716\": container with ID starting with 390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716 not found: ID does not exist" containerID="390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.094298 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716"} err="failed to get container status \"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716\": rpc error: code = NotFound desc = could not find container \"390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716\": container with ID starting with 390dfe11f42b8b3fbabe232dc31410fda82c80755a496e5c4b4f130e4a57f716 not found: ID does not exist" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.094318 4792 scope.go:117] "RemoveContainer" containerID="b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88" Feb 16 22:16:10 crc kubenswrapper[4792]: E0216 22:16:10.094579 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88\": container with ID starting with b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88 not found: ID does not exist" containerID="b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88" Feb 16 22:16:10 crc kubenswrapper[4792]: I0216 22:16:10.094649 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88"} err="failed to get container status \"b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88\": rpc error: code = NotFound desc = could not find container \"b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88\": container with ID starting with b5f528300f282af1676fc9f0056e8a24e6dfe43d63cd8b25c7382d21d5f60f88 not found: ID does not exist" Feb 16 22:16:11 crc kubenswrapper[4792]: E0216 22:16:11.028754 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:16:12 crc kubenswrapper[4792]: I0216 22:16:12.027143 4792 scope.go:117] "RemoveContainer" 
containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:16:12 crc kubenswrapper[4792]: E0216 22:16:12.027997 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:16:12 crc kubenswrapper[4792]: I0216 22:16:12.041489 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" path="/var/lib/kubelet/pods/65a2f16a-6826-495b-aa22-ec4b394828fe/volumes" Feb 16 22:16:22 crc kubenswrapper[4792]: E0216 22:16:22.028306 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:16:23 crc kubenswrapper[4792]: E0216 22:16:23.028120 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:16:27 crc kubenswrapper[4792]: I0216 22:16:27.026812 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:16:27 crc kubenswrapper[4792]: E0216 22:16:27.027697 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:16:36 crc kubenswrapper[4792]: E0216 22:16:36.045308 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:16:38 crc kubenswrapper[4792]: E0216 22:16:38.034379 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:16:39 crc kubenswrapper[4792]: I0216 22:16:39.026768 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:16:39 crc kubenswrapper[4792]: E0216 22:16:39.027134 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:16:50 crc kubenswrapper[4792]: E0216 22:16:50.028773 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:16:51 crc kubenswrapper[4792]: I0216 22:16:51.026841 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:16:51 crc kubenswrapper[4792]: E0216 22:16:51.027708 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:16:53 crc kubenswrapper[4792]: E0216 22:16:53.028526 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:03 crc kubenswrapper[4792]: I0216 22:17:03.026540 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:17:03 crc kubenswrapper[4792]: E0216 22:17:03.027331 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:17:03 crc kubenswrapper[4792]: E0216 22:17:03.030994 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:17:05 crc kubenswrapper[4792]: E0216 22:17:05.028940 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:15 crc kubenswrapper[4792]: I0216 22:17:15.027028 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:17:15 crc kubenswrapper[4792]: E0216 22:17:15.027825 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:17:17 crc kubenswrapper[4792]: E0216 22:17:17.029206 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:17:20 crc kubenswrapper[4792]: E0216 22:17:20.027932 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:26 crc kubenswrapper[4792]: I0216 22:17:26.026131 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:17:26 crc kubenswrapper[4792]: E0216 22:17:26.026750 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:17:31 crc kubenswrapper[4792]: E0216 22:17:31.030924 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:17:32 crc kubenswrapper[4792]: E0216 22:17:32.028969 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:38 crc kubenswrapper[4792]: I0216 22:17:38.037329 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:17:38 crc kubenswrapper[4792]: E0216 22:17:38.038189 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:17:43 crc kubenswrapper[4792]: E0216 22:17:43.028507 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:43 crc kubenswrapper[4792]: E0216 22:17:43.028644 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.609767 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:17:47 crc kubenswrapper[4792]: E0216 22:17:47.610627 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="extract-content" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.610643 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="extract-content" Feb 16 22:17:47 crc kubenswrapper[4792]: E0216 22:17:47.610655 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="registry-server" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.610662 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="registry-server" Feb 16 22:17:47 crc kubenswrapper[4792]: E0216 22:17:47.610685 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="extract-utilities" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.610692 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="extract-utilities" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.610951 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2f16a-6826-495b-aa22-ec4b394828fe" containerName="registry-server" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.612849 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.629061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.785571 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.785706 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.785798 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5v7\" (UniqueName: \"kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.888515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.888669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl5v7\" (UniqueName: \"kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.888937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.889152 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.889422 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.917389 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hl5v7\" (UniqueName: \"kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7\") pod \"certified-operators-sszm9\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:47 crc kubenswrapper[4792]: I0216 22:17:47.938162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:48 crc kubenswrapper[4792]: I0216 22:17:48.514901 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:17:49 crc kubenswrapper[4792]: I0216 22:17:49.064387 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerID="447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811" exitCode=0 Feb 16 22:17:49 crc kubenswrapper[4792]: I0216 22:17:49.064886 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerDied","Data":"447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811"} Feb 16 22:17:49 crc kubenswrapper[4792]: I0216 22:17:49.064924 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerStarted","Data":"c7fff1f2739579e307370bdef91de4c5daff9f56d97b2fadca7d10102b84b132"} Feb 16 22:17:50 crc kubenswrapper[4792]: I0216 22:17:50.076083 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerStarted","Data":"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5"} Feb 16 22:17:52 crc kubenswrapper[4792]: I0216 22:17:52.102440 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerID="3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5" exitCode=0 Feb 16 22:17:52 crc kubenswrapper[4792]: I0216 22:17:52.102526 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerDied","Data":"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5"} Feb 16 22:17:53 crc kubenswrapper[4792]: I0216 22:17:53.026872 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:17:53 crc kubenswrapper[4792]: E0216 22:17:53.027484 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:17:53 crc kubenswrapper[4792]: I0216 22:17:53.115556 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerStarted","Data":"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab"} Feb 16 22:17:53 crc kubenswrapper[4792]: I0216 22:17:53.137891 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sszm9" podStartSLOduration=2.703555673 podStartE2EDuration="6.137865876s" podCreationTimestamp="2026-02-16 22:17:47 +0000 UTC" firstStartedPulling="2026-02-16 22:17:49.066722716 +0000 UTC m=+2401.720001607" lastFinishedPulling="2026-02-16 22:17:52.501032919 +0000 UTC m=+2405.154311810" observedRunningTime="2026-02-16 22:17:53.131200445 +0000 UTC m=+2405.784479336" watchObservedRunningTime="2026-02-16 22:17:53.137865876 +0000 UTC m=+2405.791144777" Feb 16 22:17:54 crc kubenswrapper[4792]: E0216 22:17:54.029544 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:17:55 crc kubenswrapper[4792]: E0216 22:17:55.027379 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:17:57 crc kubenswrapper[4792]: I0216 22:17:57.938985 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:57 crc kubenswrapper[4792]: I0216 22:17:57.939619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:17:59 crc kubenswrapper[4792]: I0216 22:17:59.002624 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sszm9" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="registry-server" probeResult="failure" output=< Feb 16 22:17:59 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:17:59 crc kubenswrapper[4792]: > Feb 16 22:18:04 crc kubenswrapper[4792]: I0216 22:18:04.028153 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:18:04 crc kubenswrapper[4792]: E0216 22:18:04.029466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:18:07 crc kubenswrapper[4792]: I0216 22:18:07.995099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:18:08 crc kubenswrapper[4792]: E0216 22:18:08.039552 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:18:08 crc kubenswrapper[4792]: I0216 22:18:08.051161 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:18:08 crc kubenswrapper[4792]: I0216 22:18:08.252842 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:18:09 crc kubenswrapper[4792]: E0216 22:18:09.027574 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.291412 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sszm9" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="registry-server" containerID="cri-o://ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab" gracePeriod=2 Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.821354 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.924773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content\") pod \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.924881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl5v7\" (UniqueName: \"kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7\") pod \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.925078 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities\") pod \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\" (UID: \"fc62d207-2fb9-4308-8e58-6e8e0a49490c\") " Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.926580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities" (OuterVolumeSpecName: "utilities") pod "fc62d207-2fb9-4308-8e58-6e8e0a49490c" (UID: "fc62d207-2fb9-4308-8e58-6e8e0a49490c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.935961 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7" (OuterVolumeSpecName: "kube-api-access-hl5v7") pod "fc62d207-2fb9-4308-8e58-6e8e0a49490c" (UID: "fc62d207-2fb9-4308-8e58-6e8e0a49490c"). InnerVolumeSpecName "kube-api-access-hl5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:18:09 crc kubenswrapper[4792]: I0216 22:18:09.981571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc62d207-2fb9-4308-8e58-6e8e0a49490c" (UID: "fc62d207-2fb9-4308-8e58-6e8e0a49490c"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.027758 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.027819 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl5v7\" (UniqueName: \"kubernetes.io/projected/fc62d207-2fb9-4308-8e58-6e8e0a49490c-kube-api-access-hl5v7\") on node \"crc\" DevicePath \"\"" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.027834 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc62d207-2fb9-4308-8e58-6e8e0a49490c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.304885 4792 generic.go:334] "Generic (PLEG): container finished" podID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerID="ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab" exitCode=0 Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.304932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerDied","Data":"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab"} Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.304983 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sszm9" event={"ID":"fc62d207-2fb9-4308-8e58-6e8e0a49490c","Type":"ContainerDied","Data":"c7fff1f2739579e307370bdef91de4c5daff9f56d97b2fadca7d10102b84b132"} Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.304944 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sszm9" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.305006 4792 scope.go:117] "RemoveContainer" containerID="ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.332821 4792 scope.go:117] "RemoveContainer" containerID="3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.336081 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.357933 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sszm9"] Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.364964 4792 scope.go:117] "RemoveContainer" containerID="447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.414450 4792 scope.go:117] "RemoveContainer" containerID="ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab" Feb 16 22:18:10 crc kubenswrapper[4792]: E0216 22:18:10.415019 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab\": container with ID starting with ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab not found: ID does not exist" containerID="ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.415061 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab"} err="failed to get container status \"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab\": rpc error: code = NotFound desc = could not find container \"ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab\": container with ID starting with ba4779c93237f558ffbccec44043749326c07fcba6f87b278ebc8c79c0060aab not found: ID does not exist" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.415087 4792 scope.go:117] "RemoveContainer" containerID="3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5" Feb 16 22:18:10 crc kubenswrapper[4792]: E0216 22:18:10.415482 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5\": container with ID starting with 3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5 not found: ID does not exist" containerID="3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.415521 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5"} err="failed to get container status \"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5\": rpc error: code = NotFound desc = could not find container \"3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5\": container with ID starting with 3794e7d0f905ee08484a9bc6b33cb98b4c57f281f62c285a836cc49b9ba8e4a5 not found: ID does not exist" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.415571 4792 scope.go:117] "RemoveContainer" 
containerID="447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811" Feb 16 22:18:10 crc kubenswrapper[4792]: E0216 22:18:10.415984 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811\": container with ID starting with 447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811 not found: ID does not exist" containerID="447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811" Feb 16 22:18:10 crc kubenswrapper[4792]: I0216 22:18:10.416032 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811"} err="failed to get container status \"447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811\": rpc error: code = NotFound desc = could not find container \"447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811\": container with ID starting with 447673cf28c37a2023730b3e6156bd5303748d063ff934457c12e8c3ba8a2811 not found: ID does not exist" Feb 16 22:18:12 crc kubenswrapper[4792]: I0216 22:18:12.041435 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" path="/var/lib/kubelet/pods/fc62d207-2fb9-4308-8e58-6e8e0a49490c/volumes" Feb 16 22:18:16 crc kubenswrapper[4792]: I0216 22:18:16.027115 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:18:16 crc kubenswrapper[4792]: E0216 22:18:16.029264 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:18:19 crc kubenswrapper[4792]: E0216 22:18:19.028206 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:18:20 crc kubenswrapper[4792]: E0216 22:18:20.027913 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:18:30 crc kubenswrapper[4792]: I0216 22:18:30.026030 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:18:30 crc kubenswrapper[4792]: E0216 22:18:30.026862 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:18:30 
crc kubenswrapper[4792]: E0216 22:18:30.028872 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:18:32 crc kubenswrapper[4792]: E0216 22:18:32.028344 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:18:41 crc kubenswrapper[4792]: I0216 22:18:41.026407 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:18:41 crc kubenswrapper[4792]: E0216 22:18:41.027211 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:18:44 crc kubenswrapper[4792]: E0216 22:18:44.028615 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:18:45 crc kubenswrapper[4792]: E0216 22:18:45.027211 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:18:54 crc kubenswrapper[4792]: I0216 22:18:54.027276 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:18:54 crc kubenswrapper[4792]: E0216 22:18:54.028175 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:18:56 crc kubenswrapper[4792]: I0216 22:18:56.029330 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:18:56 crc kubenswrapper[4792]: E0216 22:18:56.157717 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:18:56 crc kubenswrapper[4792]: E0216 22:18:56.158092 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:18:56 crc kubenswrapper[4792]: E0216 22:18:56.158306 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:18:56 crc kubenswrapper[4792]: E0216 22:18:56.159517 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:18:59 crc kubenswrapper[4792]: E0216 22:18:59.110935 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:18:59 crc kubenswrapper[4792]: E0216 22:18:59.111466 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:18:59 crc kubenswrapper[4792]: E0216 22:18:59.111613 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:18:59 crc kubenswrapper[4792]: E0216 22:18:59.112780 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:19:07 crc kubenswrapper[4792]: I0216 22:19:07.026558 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:19:07 crc kubenswrapper[4792]: E0216 22:19:07.027495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:19:11 crc kubenswrapper[4792]: E0216 22:19:11.029307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:19:14 crc kubenswrapper[4792]: E0216 22:19:14.028802 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:19:21 crc kubenswrapper[4792]: I0216 22:19:21.027066 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:19:21 crc kubenswrapper[4792]: E0216 22:19:21.027893 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" 
podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:19:22 crc kubenswrapper[4792]: E0216 22:19:22.030270 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:19:27 crc kubenswrapper[4792]: E0216 22:19:27.030725 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:19:33 crc kubenswrapper[4792]: I0216 22:19:33.031691 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:19:33 crc kubenswrapper[4792]: E0216 22:19:33.033112 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:19:33 crc kubenswrapper[4792]: I0216 22:19:33.103146 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d7f78dd75-dlmv8" podUID="633c7466-7045-47d2-906d-0d9881501baa" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 22:19:34 crc kubenswrapper[4792]: E0216 22:19:34.031539 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:19:41 crc kubenswrapper[4792]: E0216 22:19:41.028659 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:19:44 crc kubenswrapper[4792]: I0216 22:19:44.026772 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:19:44 crc kubenswrapper[4792]: E0216 22:19:44.028069 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:19:47 crc kubenswrapper[4792]: E0216 22:19:47.029518 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:19:56 crc kubenswrapper[4792]: I0216 22:19:56.027130 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:19:56 crc kubenswrapper[4792]: E0216 22:19:56.028088 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:19:56 crc kubenswrapper[4792]: E0216 22:19:56.028122 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:20:02 crc kubenswrapper[4792]: E0216 22:20:02.029976 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:20:09 crc kubenswrapper[4792]: E0216 22:20:09.029838 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:20:11 crc kubenswrapper[4792]: I0216 22:20:11.026548 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:20:12 crc kubenswrapper[4792]: I0216 22:20:12.090752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f"} Feb 16 22:20:17 crc kubenswrapper[4792]: E0216 22:20:17.028124 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:20:19 crc kubenswrapper[4792]: I0216 22:20:19.168231 4792 generic.go:334] "Generic (PLEG): container finished" podID="79c18359-29ae-4f68-aee4-ada05c949dfd" containerID="8e79a539ef772bdd4504627bbc5ff363b2d61644b5b0e11dfce1af7e94c9105c" exitCode=2 Feb 16 22:20:19 crc kubenswrapper[4792]: I0216 22:20:19.168427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" 
event={"ID":"79c18359-29ae-4f68-aee4-ada05c949dfd","Type":"ContainerDied","Data":"8e79a539ef772bdd4504627bbc5ff363b2d61644b5b0e11dfce1af7e94c9105c"} Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.684200 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.785425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam\") pod \"79c18359-29ae-4f68-aee4-ada05c949dfd\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.785478 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory\") pod \"79c18359-29ae-4f68-aee4-ada05c949dfd\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.785558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzxb6\" (UniqueName: \"kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6\") pod \"79c18359-29ae-4f68-aee4-ada05c949dfd\" (UID: \"79c18359-29ae-4f68-aee4-ada05c949dfd\") " Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.791240 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6" (OuterVolumeSpecName: "kube-api-access-jzxb6") pod "79c18359-29ae-4f68-aee4-ada05c949dfd" (UID: "79c18359-29ae-4f68-aee4-ada05c949dfd"). InnerVolumeSpecName "kube-api-access-jzxb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.824290 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79c18359-29ae-4f68-aee4-ada05c949dfd" (UID: "79c18359-29ae-4f68-aee4-ada05c949dfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.826227 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory" (OuterVolumeSpecName: "inventory") pod "79c18359-29ae-4f68-aee4-ada05c949dfd" (UID: "79c18359-29ae-4f68-aee4-ada05c949dfd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.889934 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.889967 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79c18359-29ae-4f68-aee4-ada05c949dfd-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:20 crc kubenswrapper[4792]: I0216 22:20:20.889977 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzxb6\" (UniqueName: \"kubernetes.io/projected/79c18359-29ae-4f68-aee4-ada05c949dfd-kube-api-access-jzxb6\") on node \"crc\" DevicePath \"\"" Feb 16 22:20:21 crc kubenswrapper[4792]: I0216 22:20:21.196260 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" event={"ID":"79c18359-29ae-4f68-aee4-ada05c949dfd","Type":"ContainerDied","Data":"9e8e6bff210f7b9e561dc5e49dcce681e3cc941a691896dd0d5ed0c39335a865"} Feb 16 22:20:21 crc kubenswrapper[4792]: I0216 22:20:21.196304 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8e6bff210f7b9e561dc5e49dcce681e3cc941a691896dd0d5ed0c39335a865" Feb 16 22:20:21 crc kubenswrapper[4792]: I0216 22:20:21.196391 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g" Feb 16 22:20:24 crc kubenswrapper[4792]: E0216 22:20:24.032257 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:20:32 crc kubenswrapper[4792]: E0216 22:20:32.030356 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:20:35 crc kubenswrapper[4792]: E0216 22:20:35.028511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.042506 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm"] Feb 16 22:20:38 crc kubenswrapper[4792]: E0216 22:20:38.043390 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="extract-content" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043407 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="extract-content" Feb 16 22:20:38 crc kubenswrapper[4792]: E0216 22:20:38.043451 4792 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="registry-server" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043459 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="registry-server" Feb 16 22:20:38 crc kubenswrapper[4792]: E0216 22:20:38.043486 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c18359-29ae-4f68-aee4-ada05c949dfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043496 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c18359-29ae-4f68-aee4-ada05c949dfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:20:38 crc kubenswrapper[4792]: E0216 22:20:38.043516 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="extract-utilities" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043525 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="extract-utilities" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043830 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc62d207-2fb9-4308-8e58-6e8e0a49490c" containerName="registry-server" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.043846 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c18359-29ae-4f68-aee4-ada05c949dfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.044769 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.046001 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm"] Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.047057 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.047322 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.047521 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.050385 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.164842 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.165164 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.165368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.268228 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.268830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.268948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.275877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.276077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.288524 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:38 crc kubenswrapper[4792]: I0216 22:20:38.371257 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:20:39 crc kubenswrapper[4792]: I0216 22:20:39.552662 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm"] Feb 16 22:20:39 crc kubenswrapper[4792]: W0216 22:20:39.555788 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode792897f_1081_40d9_8e65_3f3ac21cd119.slice/crio-d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c WatchSource:0}: Error finding container d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c: Status 404 returned error can't find the container with id d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c Feb 16 22:20:40 crc kubenswrapper[4792]: I0216 22:20:40.438996 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" event={"ID":"e792897f-1081-40d9-8e65-3f3ac21cd119","Type":"ContainerStarted","Data":"8918ee66eaae21b6d5499fe9b23a32191b242b3ab03aed8d6e83c043bae5d8a9"} Feb 16 22:20:40 crc kubenswrapper[4792]: I0216 22:20:40.439319 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" event={"ID":"e792897f-1081-40d9-8e65-3f3ac21cd119","Type":"ContainerStarted","Data":"d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c"} Feb 16 22:20:40 crc kubenswrapper[4792]: I0216 22:20:40.465626 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" podStartSLOduration=1.895713934 podStartE2EDuration="2.465605878s" podCreationTimestamp="2026-02-16 22:20:38 +0000 UTC" firstStartedPulling="2026-02-16 22:20:39.559428079 +0000 UTC m=+2572.212706970" lastFinishedPulling="2026-02-16 22:20:40.129320023 +0000 UTC m=+2572.782598914" observedRunningTime="2026-02-16 22:20:40.46198322 +0000 UTC m=+2573.115262121" watchObservedRunningTime="2026-02-16 22:20:40.465605878 +0000 UTC m=+2573.118884779" Feb 16 22:20:46 crc kubenswrapper[4792]: E0216 22:20:46.028905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:20:51 crc kubenswrapper[4792]: E0216 22:20:51.029241 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:21:00 crc kubenswrapper[4792]: E0216 22:21:00.029911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:21:05 crc kubenswrapper[4792]: E0216 22:21:05.030188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:21:11 crc kubenswrapper[4792]: E0216 22:21:11.028895 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:21:16 crc kubenswrapper[4792]: E0216 22:21:16.030352 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:21:22 crc kubenswrapper[4792]: E0216 22:21:22.029318 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:21:30 crc kubenswrapper[4792]: E0216 22:21:30.029402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:21:35 crc kubenswrapper[4792]: E0216 22:21:35.031401 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:21:43 crc kubenswrapper[4792]: E0216 22:21:43.028583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:21:49 crc kubenswrapper[4792]: E0216 22:21:49.028973 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:21:57 crc kubenswrapper[4792]: E0216 22:21:57.028543 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:22:02 crc kubenswrapper[4792]: E0216 22:22:02.029290 4792 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:22:12 crc kubenswrapper[4792]: E0216 22:22:12.032573 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:22:15 crc kubenswrapper[4792]: E0216 22:22:15.029108 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:22:24 crc kubenswrapper[4792]: E0216 22:22:24.032063 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:22:26 crc kubenswrapper[4792]: E0216 22:22:26.029855 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:22:31 crc kubenswrapper[4792]: I0216 22:22:31.535210 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:22:31 crc kubenswrapper[4792]: I0216 22:22:31.535649 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:22:39 crc kubenswrapper[4792]: E0216 22:22:39.028839 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:22:40 crc kubenswrapper[4792]: E0216 22:22:40.027297 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:22:52 crc kubenswrapper[4792]: E0216 22:22:52.029736 4792 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:22:55 crc kubenswrapper[4792]: E0216 22:22:55.027882 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:23:01 crc kubenswrapper[4792]: I0216 22:23:01.532853 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:23:01 crc kubenswrapper[4792]: I0216 22:23:01.533290 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:23:06 crc kubenswrapper[4792]: E0216 22:23:06.029578 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:23:09 crc kubenswrapper[4792]: E0216 22:23:09.028589 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:23:17 crc kubenswrapper[4792]: E0216 22:23:17.028421 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:23:24 crc kubenswrapper[4792]: E0216 22:23:24.029230 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:23:31 crc kubenswrapper[4792]: E0216 22:23:31.028664 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:23:31 crc kubenswrapper[4792]: I0216 22:23:31.532984 
4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:23:31 crc kubenswrapper[4792]: I0216 22:23:31.533069 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:23:31 crc kubenswrapper[4792]: I0216 22:23:31.533120 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:23:31 crc kubenswrapper[4792]: I0216 22:23:31.534149 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:23:31 crc kubenswrapper[4792]: I0216 22:23:31.534225 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f" gracePeriod=600 Feb 16 22:23:32 crc kubenswrapper[4792]: I0216 22:23:32.331846 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f" exitCode=0 Feb 16 22:23:32 crc kubenswrapper[4792]: I0216 22:23:32.331923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f"} Feb 16 22:23:32 crc kubenswrapper[4792]: I0216 22:23:32.332557 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e"} Feb 16 22:23:32 crc kubenswrapper[4792]: I0216 22:23:32.332593 4792 scope.go:117] "RemoveContainer" containerID="5f36e25cdb3cd9c0164fa75c84a5a99a471cc2366d3dbbc6fe8aa9f506ca7b64" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.699643 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.702387 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.713482 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.896726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.897133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7czc\" (UniqueName: \"kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.897173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.999352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7czc\" (UniqueName: \"kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.999455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:35 crc kubenswrapper[4792]: I0216 22:23:35.999568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:36 crc kubenswrapper[4792]: I0216 22:23:36.000316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:36 crc kubenswrapper[4792]: I0216 22:23:36.000390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:36 crc kubenswrapper[4792]: I0216 22:23:36.025030 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j7czc\" (UniqueName: \"kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc\") pod \"redhat-operators-8pwps\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:36 crc kubenswrapper[4792]: I0216 22:23:36.056932 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:36 crc kubenswrapper[4792]: W0216 22:23:36.606111 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd342a681_3890_4bbb_9e49_0c42895eccd3.slice/crio-f62975125f3ec0270914af40a4325e5654118908fc98770f6da3574954d441fd WatchSource:0}: Error finding container f62975125f3ec0270914af40a4325e5654118908fc98770f6da3574954d441fd: Status 404 returned error can't find the container with id f62975125f3ec0270914af40a4325e5654118908fc98770f6da3574954d441fd Feb 16 22:23:36 crc kubenswrapper[4792]: I0216 22:23:36.608939 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:37 crc kubenswrapper[4792]: I0216 22:23:37.382050 4792 generic.go:334] "Generic (PLEG): container finished" podID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerID="9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8" exitCode=0 Feb 16 22:23:37 crc kubenswrapper[4792]: I0216 22:23:37.382146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerDied","Data":"9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8"} Feb 16 22:23:37 crc kubenswrapper[4792]: I0216 22:23:37.382292 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerStarted","Data":"f62975125f3ec0270914af40a4325e5654118908fc98770f6da3574954d441fd"} Feb 16 22:23:38 crc kubenswrapper[4792]: I0216 22:23:38.398409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerStarted","Data":"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248"} Feb 16 22:23:39 crc kubenswrapper[4792]: E0216 22:23:39.028738 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:23:41 crc kubenswrapper[4792]: I0216 22:23:41.440388 4792 generic.go:334] "Generic (PLEG): container finished" podID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerID="5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248" exitCode=0 Feb 16 22:23:41 crc kubenswrapper[4792]: I0216 22:23:41.440509 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerDied","Data":"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248"} Feb 16 22:23:42 crc kubenswrapper[4792]: I0216 22:23:42.452335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" 
event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerStarted","Data":"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a"} Feb 16 22:23:42 crc kubenswrapper[4792]: I0216 22:23:42.482391 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8pwps" podStartSLOduration=2.925582607 podStartE2EDuration="7.482363s" podCreationTimestamp="2026-02-16 22:23:35 +0000 UTC" firstStartedPulling="2026-02-16 22:23:37.384154353 +0000 UTC m=+2750.037433244" lastFinishedPulling="2026-02-16 22:23:41.940934746 +0000 UTC m=+2754.594213637" observedRunningTime="2026-02-16 22:23:42.474984409 +0000 UTC m=+2755.128263300" watchObservedRunningTime="2026-02-16 22:23:42.482363 +0000 UTC m=+2755.135641891" Feb 16 22:23:45 crc kubenswrapper[4792]: E0216 22:23:45.029853 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:23:46 crc kubenswrapper[4792]: I0216 22:23:46.057080 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:46 crc kubenswrapper[4792]: I0216 22:23:46.057224 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:47 crc kubenswrapper[4792]: I0216 22:23:47.118435 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8pwps" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="registry-server" probeResult="failure" output=< Feb 16 22:23:47 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:23:47 crc kubenswrapper[4792]: > Feb 16 22:23:52 crc kubenswrapper[4792]: E0216 22:23:52.034265 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:23:56 crc kubenswrapper[4792]: I0216 22:23:56.162648 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:56 crc kubenswrapper[4792]: I0216 22:23:56.227280 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:56 crc kubenswrapper[4792]: I0216 22:23:56.408318 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:57 crc kubenswrapper[4792]: I0216 22:23:57.604494 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pwps" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="registry-server" containerID="cri-o://94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a" gracePeriod=2 Feb 16 22:23:58 crc kubenswrapper[4792]: E0216 22:23:58.088226 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.253972 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.298391 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content\") pod \"d342a681-3890-4bbb-9e49-0c42895eccd3\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.298588 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities\") pod \"d342a681-3890-4bbb-9e49-0c42895eccd3\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.298661 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7czc\" (UniqueName: \"kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc\") pod \"d342a681-3890-4bbb-9e49-0c42895eccd3\" (UID: \"d342a681-3890-4bbb-9e49-0c42895eccd3\") " Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.305959 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc" (OuterVolumeSpecName: "kube-api-access-j7czc") pod "d342a681-3890-4bbb-9e49-0c42895eccd3" (UID: "d342a681-3890-4bbb-9e49-0c42895eccd3"). InnerVolumeSpecName "kube-api-access-j7czc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.306436 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities" (OuterVolumeSpecName: "utilities") pod "d342a681-3890-4bbb-9e49-0c42895eccd3" (UID: "d342a681-3890-4bbb-9e49-0c42895eccd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.401109 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.401146 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7czc\" (UniqueName: \"kubernetes.io/projected/d342a681-3890-4bbb-9e49-0c42895eccd3-kube-api-access-j7czc\") on node \"crc\" DevicePath \"\"" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.427276 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d342a681-3890-4bbb-9e49-0c42895eccd3" (UID: "d342a681-3890-4bbb-9e49-0c42895eccd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.502639 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d342a681-3890-4bbb-9e49-0c42895eccd3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.620641 4792 generic.go:334] "Generic (PLEG): container finished" podID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerID="94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a" exitCode=0 Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.620681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerDied","Data":"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a"} Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.620705 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pwps" event={"ID":"d342a681-3890-4bbb-9e49-0c42895eccd3","Type":"ContainerDied","Data":"f62975125f3ec0270914af40a4325e5654118908fc98770f6da3574954d441fd"} Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.620721 4792 scope.go:117] "RemoveContainer" containerID="94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.620734 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pwps" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.645767 4792 scope.go:117] "RemoveContainer" containerID="5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.666047 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.677297 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8pwps"] Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.678898 4792 scope.go:117] "RemoveContainer" containerID="9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.751334 4792 scope.go:117] "RemoveContainer" containerID="94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a" Feb 16 22:23:58 crc kubenswrapper[4792]: E0216 22:23:58.751974 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a\": container with ID starting with 94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a not found: ID does not exist" containerID="94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.752009 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a"} err="failed to get container status \"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a\": rpc error: code = NotFound desc = could not find container \"94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a\": container with ID starting with 94c3869413b7d2b4527d4f0044bddeb9723591dcf00eb9532a979acb8e8a872a not found: ID does not exist" Feb 16 22:23:58 crc 
kubenswrapper[4792]: I0216 22:23:58.752031 4792 scope.go:117] "RemoveContainer" containerID="5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248" Feb 16 22:23:58 crc kubenswrapper[4792]: E0216 22:23:58.752490 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248\": container with ID starting with 5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248 not found: ID does not exist" containerID="5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.752528 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248"} err="failed to get container status \"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248\": rpc error: code = NotFound desc = could not find container \"5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248\": container with ID starting with 5ad0a47b122db96d6a073ac99d7a76393015991c70e5b4d98f3b3b2279bd0248 not found: ID does not exist" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.752557 4792 scope.go:117] "RemoveContainer" containerID="9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8" Feb 16 22:23:58 crc kubenswrapper[4792]: E0216 22:23:58.752857 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8\": container with ID starting with 9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8 not found: ID does not exist" containerID="9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8" Feb 16 22:23:58 crc kubenswrapper[4792]: I0216 22:23:58.752887 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8"} err="failed to get container status \"9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8\": rpc error: code = NotFound desc = could not find container \"9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8\": container with ID starting with 9ddd6b630860780c5ec06e593772211c6ca045255d9bd728fdedd8a57d453ce8 not found: ID does not exist" Feb 16 22:24:00 crc kubenswrapper[4792]: I0216 22:24:00.042719 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" path="/var/lib/kubelet/pods/d342a681-3890-4bbb-9e49-0c42895eccd3/volumes" Feb 16 22:24:05 crc kubenswrapper[4792]: I0216 22:24:05.030591 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:24:05 crc kubenswrapper[4792]: E0216 22:24:05.147315 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:24:05 crc kubenswrapper[4792]: E0216 22:24:05.147692 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:24:05 crc kubenswrapper[4792]: E0216 22:24:05.147889 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:24:05 crc kubenswrapper[4792]: E0216 22:24:05.154572 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:24:09 crc kubenswrapper[4792]: E0216 22:24:09.129303 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:24:09 crc kubenswrapper[4792]: E0216 22:24:09.129795 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:24:09 crc kubenswrapper[4792]: E0216 22:24:09.129961 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:24:09 crc kubenswrapper[4792]: E0216 22:24:09.132734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:24:20 crc kubenswrapper[4792]: E0216 22:24:20.030493 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:24:20 crc kubenswrapper[4792]: E0216 22:24:20.030871 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:24:31 crc kubenswrapper[4792]: E0216 22:24:31.029233 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:24:33 crc kubenswrapper[4792]: E0216 22:24:33.028343 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:24:45 crc kubenswrapper[4792]: E0216 22:24:45.030238 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:24:46 crc kubenswrapper[4792]: E0216 22:24:46.028671 4792 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:24:58 crc kubenswrapper[4792]: E0216 22:24:58.035730 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:25:00 crc kubenswrapper[4792]: E0216 22:25:00.029147 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:25:09 crc kubenswrapper[4792]: E0216 22:25:09.027542 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:25:13 crc kubenswrapper[4792]: E0216 22:25:13.028843 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:25:23 crc kubenswrapper[4792]: E0216 22:25:23.029751 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:25:24 crc kubenswrapper[4792]: E0216 22:25:24.030388 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:25:31 crc kubenswrapper[4792]: I0216 22:25:31.532681 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:25:31 crc kubenswrapper[4792]: I0216 22:25:31.533462 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:25:37 crc kubenswrapper[4792]: E0216 22:25:37.029147 4792 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:25:38 crc kubenswrapper[4792]: E0216 22:25:38.035807 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:25:49 crc kubenswrapper[4792]: E0216 22:25:49.034106 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:25:53 crc kubenswrapper[4792]: E0216 22:25:53.029244 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:01 crc kubenswrapper[4792]: I0216 22:26:01.532340 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:26:01 crc kubenswrapper[4792]: I0216 22:26:01.532970 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:26:04 crc kubenswrapper[4792]: E0216 22:26:04.029876 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:26:08 crc kubenswrapper[4792]: E0216 22:26:08.039014 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:18 crc kubenswrapper[4792]: E0216 22:26:18.038577 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:26:19 crc kubenswrapper[4792]: E0216 22:26:19.030526 
4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:31 crc kubenswrapper[4792]: E0216 22:26:31.031993 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:26:31 crc kubenswrapper[4792]: E0216 22:26:31.032022 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:31 crc kubenswrapper[4792]: I0216 22:26:31.532103 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:26:31 crc kubenswrapper[4792]: I0216 22:26:31.532452 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:26:31 crc kubenswrapper[4792]: I0216 22:26:31.532497 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:26:31 crc kubenswrapper[4792]: I0216 22:26:31.533545 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:26:31 crc kubenswrapper[4792]: I0216 22:26:31.533813 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" gracePeriod=600 Feb 16 22:26:31 crc kubenswrapper[4792]: E0216 22:26:31.658016 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:26:32 crc kubenswrapper[4792]: I0216 22:26:32.482978 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" exitCode=0 Feb 16 22:26:32 crc kubenswrapper[4792]: I0216 22:26:32.483031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e"} Feb 16 22:26:32 crc kubenswrapper[4792]: I0216 22:26:32.483074 4792 scope.go:117] "RemoveContainer" containerID="a88526ac52e3a6b0823b66bdf52bfc3c6e75f1612a565b1641e74977ff16389f" Feb 16 22:26:32 crc kubenswrapper[4792]: I0216 22:26:32.484040 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:26:32 crc kubenswrapper[4792]: E0216 22:26:32.484485 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.353988 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:38 crc kubenswrapper[4792]: E0216 22:26:38.355201 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="extract-utilities" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.355223 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="extract-utilities" Feb 16 22:26:38 crc kubenswrapper[4792]: E0216 22:26:38.355249 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="extract-content" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.355257 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="extract-content" Feb 16 22:26:38 crc kubenswrapper[4792]: E0216 22:26:38.355296 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="registry-server" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.355305 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="registry-server" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.359258 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d342a681-3890-4bbb-9e49-0c42895eccd3" containerName="registry-server" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.361755 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.382753 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.451965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jbnp\" (UniqueName: \"kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.452301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.452331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.554910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jbnp\" (UniqueName: \"kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.554974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.555019 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.555553 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.555645 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.575430 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9jbnp\" (UniqueName: \"kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp\") pod \"community-operators-5pxhs\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:38 crc kubenswrapper[4792]: I0216 22:26:38.686818 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:39 crc kubenswrapper[4792]: I0216 22:26:39.258175 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:39 crc kubenswrapper[4792]: I0216 22:26:39.555586 4792 generic.go:334] "Generic (PLEG): container finished" podID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerID="ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c" exitCode=0 Feb 16 22:26:39 crc kubenswrapper[4792]: I0216 22:26:39.555924 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerDied","Data":"ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c"} Feb 16 22:26:39 crc kubenswrapper[4792]: I0216 22:26:39.556153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerStarted","Data":"c747268b7641d31209187d97fbbdafe2420430e75f427aa106ce731c487e186c"} Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.545316 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.549063 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.561681 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.590831 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerStarted","Data":"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce"} Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.717687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.718057 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxxf\" (UniqueName: \"kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.718235 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.820208 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.820368 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.820466 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxxf\" (UniqueName: \"kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.820824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.820847 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.860680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rxxf\" (UniqueName: \"kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf\") pod \"redhat-marketplace-ws8mr\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:40 crc kubenswrapper[4792]: I0216 22:26:40.915203 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:41 crc kubenswrapper[4792]: I0216 22:26:41.413963 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:41 crc kubenswrapper[4792]: I0216 22:26:41.601870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerStarted","Data":"233a4c7b0b08be7a36d72fcf3ba1f1ab27b743e860efd5997d6805694a978ebc"} Feb 16 22:26:42 crc kubenswrapper[4792]: I0216 22:26:42.614492 4792 generic.go:334] "Generic (PLEG): container finished" podID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerID="37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a" exitCode=0 Feb 16 22:26:42 crc kubenswrapper[4792]: I0216 22:26:42.614545 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerDied","Data":"37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a"} Feb 16 22:26:42 crc kubenswrapper[4792]: I0216 22:26:42.617581 4792 generic.go:334] "Generic (PLEG): container finished" podID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerID="8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce" exitCode=0 Feb 16 22:26:42 crc kubenswrapper[4792]: I0216 22:26:42.617624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerDied","Data":"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce"} Feb 16 22:26:43 crc kubenswrapper[4792]: I0216 22:26:43.633847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerStarted","Data":"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a"} Feb 16 22:26:43 crc kubenswrapper[4792]: I0216 22:26:43.637525 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerStarted","Data":"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c"} Feb 16 22:26:43 crc kubenswrapper[4792]: I0216 22:26:43.672190 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5pxhs" podStartSLOduration=2.215458847 podStartE2EDuration="5.672169863s" podCreationTimestamp="2026-02-16 22:26:38 +0000 UTC" firstStartedPulling="2026-02-16 22:26:39.558078011 +0000 UTC m=+2932.211356902" lastFinishedPulling="2026-02-16 22:26:43.014789017 +0000 UTC 
m=+2935.668067918" observedRunningTime="2026-02-16 22:26:43.666264002 +0000 UTC m=+2936.319542923" watchObservedRunningTime="2026-02-16 22:26:43.672169863 +0000 UTC m=+2936.325448754" Feb 16 22:26:44 crc kubenswrapper[4792]: E0216 22:26:44.048811 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:44 crc kubenswrapper[4792]: I0216 22:26:44.655366 4792 generic.go:334] "Generic (PLEG): container finished" podID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerID="f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c" exitCode=0 Feb 16 22:26:44 crc kubenswrapper[4792]: I0216 22:26:44.655418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerDied","Data":"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c"} Feb 16 22:26:45 crc kubenswrapper[4792]: E0216 22:26:45.028368 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:26:45 crc kubenswrapper[4792]: I0216 22:26:45.668250 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerStarted","Data":"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb"} Feb 16 22:26:45 crc kubenswrapper[4792]: I0216 22:26:45.695737 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ws8mr" podStartSLOduration=3.211415639 podStartE2EDuration="5.695718381s" podCreationTimestamp="2026-02-16 22:26:40 +0000 UTC" firstStartedPulling="2026-02-16 22:26:42.620049071 +0000 UTC m=+2935.273327962" lastFinishedPulling="2026-02-16 22:26:45.104351813 +0000 UTC m=+2937.757630704" observedRunningTime="2026-02-16 22:26:45.689092602 +0000 UTC m=+2938.342371483" watchObservedRunningTime="2026-02-16 22:26:45.695718381 +0000 UTC m=+2938.348997272" Feb 16 22:26:47 crc kubenswrapper[4792]: I0216 22:26:47.027333 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:26:47 crc kubenswrapper[4792]: E0216 22:26:47.027670 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:26:48 crc kubenswrapper[4792]: I0216 22:26:48.687010 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:48 crc kubenswrapper[4792]: I0216 22:26:48.688696 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:48 crc kubenswrapper[4792]: I0216 22:26:48.757182 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:49 crc kubenswrapper[4792]: I0216 22:26:49.771902 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:50 crc kubenswrapper[4792]: I0216 22:26:50.135796 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:50 crc kubenswrapper[4792]: I0216 22:26:50.918830 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:50 crc kubenswrapper[4792]: I0216 22:26:50.919181 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:51 crc kubenswrapper[4792]: I0216 22:26:51.018844 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:51 crc kubenswrapper[4792]: I0216 22:26:51.740966 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5pxhs" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="registry-server" containerID="cri-o://16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a" gracePeriod=2 Feb 16 22:26:51 crc kubenswrapper[4792]: I0216 22:26:51.813246 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.312161 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.428009 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") pod \"e9392534-ecbe-4c47-911f-5da1ea52e719\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.428113 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities\") pod \"e9392534-ecbe-4c47-911f-5da1ea52e719\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.428383 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jbnp\" (UniqueName: \"kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp\") pod \"e9392534-ecbe-4c47-911f-5da1ea52e719\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.429848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities" (OuterVolumeSpecName: "utilities") pod "e9392534-ecbe-4c47-911f-5da1ea52e719" (UID: "e9392534-ecbe-4c47-911f-5da1ea52e719"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.438112 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp" (OuterVolumeSpecName: "kube-api-access-9jbnp") pod "e9392534-ecbe-4c47-911f-5da1ea52e719" (UID: "e9392534-ecbe-4c47-911f-5da1ea52e719"). InnerVolumeSpecName "kube-api-access-9jbnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.530633 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9392534-ecbe-4c47-911f-5da1ea52e719" (UID: "e9392534-ecbe-4c47-911f-5da1ea52e719"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.531808 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") pod \"e9392534-ecbe-4c47-911f-5da1ea52e719\" (UID: \"e9392534-ecbe-4c47-911f-5da1ea52e719\") " Feb 16 22:26:52 crc kubenswrapper[4792]: W0216 22:26:52.531997 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e9392534-ecbe-4c47-911f-5da1ea52e719/volumes/kubernetes.io~empty-dir/catalog-content Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.532025 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9392534-ecbe-4c47-911f-5da1ea52e719" (UID: "e9392534-ecbe-4c47-911f-5da1ea52e719"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.533461 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.533505 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9392534-ecbe-4c47-911f-5da1ea52e719-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.533523 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jbnp\" (UniqueName: \"kubernetes.io/projected/e9392534-ecbe-4c47-911f-5da1ea52e719-kube-api-access-9jbnp\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.546907 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.759006 4792 generic.go:334] "Generic (PLEG): container finished" podID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerID="16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a" exitCode=0 Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.759067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerDied","Data":"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a"} Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.759099 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pxhs" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.759122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pxhs" event={"ID":"e9392534-ecbe-4c47-911f-5da1ea52e719","Type":"ContainerDied","Data":"c747268b7641d31209187d97fbbdafe2420430e75f427aa106ce731c487e186c"} Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.759145 4792 scope.go:117] "RemoveContainer" containerID="16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.761935 4792 generic.go:334] "Generic (PLEG): container finished" podID="e792897f-1081-40d9-8e65-3f3ac21cd119" containerID="8918ee66eaae21b6d5499fe9b23a32191b242b3ab03aed8d6e83c043bae5d8a9" exitCode=2 Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.761970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" event={"ID":"e792897f-1081-40d9-8e65-3f3ac21cd119","Type":"ContainerDied","Data":"8918ee66eaae21b6d5499fe9b23a32191b242b3ab03aed8d6e83c043bae5d8a9"} Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.805498 4792 scope.go:117] "RemoveContainer" containerID="8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.832126 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.837997 4792 scope.go:117] "RemoveContainer" containerID="ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.838303 4792 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-5pxhs"] Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.905220 4792 scope.go:117] "RemoveContainer" containerID="16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a" Feb 16 22:26:52 crc kubenswrapper[4792]: E0216 22:26:52.905815 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a\": container with ID starting with 16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a not found: ID does not exist" containerID="16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.905866 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a"} err="failed to get container status \"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a\": rpc error: code = NotFound desc = could not find container \"16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a\": container with ID starting with 16861e387b59b08682ab5f6030ab0747a0f2015c6e60509ed47399c76da07b5a not found: ID does not exist" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.905901 4792 scope.go:117] "RemoveContainer" containerID="8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce" Feb 16 22:26:52 crc kubenswrapper[4792]: E0216 22:26:52.906326 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce\": container with ID starting with 8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce not found: ID does not exist" containerID="8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.906361 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce"} err="failed to get container status \"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce\": rpc error: code = NotFound desc = could not find container \"8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce\": container with ID starting with 8ba668a7cfd2a5c7823fe88b29027f1218bf5f53fdf92a892bf8763f704afcce not found: ID does not exist" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.906384 4792 scope.go:117] "RemoveContainer" containerID="ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c" Feb 16 22:26:52 crc kubenswrapper[4792]: E0216 22:26:52.906706 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c\": container with ID starting with ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c not found: ID does not exist" containerID="ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c" Feb 16 22:26:52 crc kubenswrapper[4792]: I0216 22:26:52.906729 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c"} err="failed to get container status \"ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c\": rpc error: code = NotFound 
desc = could not find container \"ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c\": container with ID starting with ae28c5e0f4517564478aa09d9640fc065478f2a90a49da73f280de54ad572d8c not found: ID does not exist" Feb 16 22:26:53 crc kubenswrapper[4792]: I0216 22:26:53.777622 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ws8mr" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="registry-server" containerID="cri-o://55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb" gracePeriod=2 Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.043319 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" path="/var/lib/kubelet/pods/e9392534-ecbe-4c47-911f-5da1ea52e719/volumes" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.404450 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.413018 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.494455 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam\") pod \"e792897f-1081-40d9-8e65-3f3ac21cd119\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.494577 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7\") pod \"e792897f-1081-40d9-8e65-3f3ac21cd119\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.494789 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content\") pod \"7467294d-0ce2-4ecd-a857-dafa3f718355\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.494853 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities\") pod \"7467294d-0ce2-4ecd-a857-dafa3f718355\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.494920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rxxf\" (UniqueName: \"kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf\") pod \"7467294d-0ce2-4ecd-a857-dafa3f718355\" (UID: \"7467294d-0ce2-4ecd-a857-dafa3f718355\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.495006 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory\") pod \"e792897f-1081-40d9-8e65-3f3ac21cd119\" (UID: \"e792897f-1081-40d9-8e65-3f3ac21cd119\") " Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.497167 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities" (OuterVolumeSpecName: "utilities") pod "7467294d-0ce2-4ecd-a857-dafa3f718355" (UID: "7467294d-0ce2-4ecd-a857-dafa3f718355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.506915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7" (OuterVolumeSpecName: "kube-api-access-kplw7") pod "e792897f-1081-40d9-8e65-3f3ac21cd119" (UID: "e792897f-1081-40d9-8e65-3f3ac21cd119"). InnerVolumeSpecName "kube-api-access-kplw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.517074 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf" (OuterVolumeSpecName: "kube-api-access-4rxxf") pod "7467294d-0ce2-4ecd-a857-dafa3f718355" (UID: "7467294d-0ce2-4ecd-a857-dafa3f718355"). InnerVolumeSpecName "kube-api-access-4rxxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.559119 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e792897f-1081-40d9-8e65-3f3ac21cd119" (UID: "e792897f-1081-40d9-8e65-3f3ac21cd119"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.572870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7467294d-0ce2-4ecd-a857-dafa3f718355" (UID: "7467294d-0ce2-4ecd-a857-dafa3f718355"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.593524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory" (OuterVolumeSpecName: "inventory") pod "e792897f-1081-40d9-8e65-3f3ac21cd119" (UID: "e792897f-1081-40d9-8e65-3f3ac21cd119"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598179 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598207 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7467294d-0ce2-4ecd-a857-dafa3f718355-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598217 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rxxf\" (UniqueName: \"kubernetes.io/projected/7467294d-0ce2-4ecd-a857-dafa3f718355-kube-api-access-4rxxf\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598228 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598237 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e792897f-1081-40d9-8e65-3f3ac21cd119-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.598245 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kplw7\" (UniqueName: \"kubernetes.io/projected/e792897f-1081-40d9-8e65-3f3ac21cd119-kube-api-access-kplw7\") on node \"crc\" DevicePath \"\"" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.791489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" event={"ID":"e792897f-1081-40d9-8e65-3f3ac21cd119","Type":"ContainerDied","Data":"d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c"} Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.791932 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47266c65cb97c4afb7a760e3826cdc48d8770d1396ddcfe2dc9586a36d75e0c" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.791525 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.794271 4792 generic.go:334] "Generic (PLEG): container finished" podID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerID="55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb" exitCode=0 Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.794324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerDied","Data":"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb"} Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.794360 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws8mr" event={"ID":"7467294d-0ce2-4ecd-a857-dafa3f718355","Type":"ContainerDied","Data":"233a4c7b0b08be7a36d72fcf3ba1f1ab27b743e860efd5997d6805694a978ebc"} Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.794393 4792 scope.go:117] "RemoveContainer" containerID="55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.795726 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws8mr" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.831298 4792 scope.go:117] "RemoveContainer" containerID="f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.849936 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.868581 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws8mr"] Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.871209 4792 scope.go:117] "RemoveContainer" containerID="37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.910412 4792 scope.go:117] "RemoveContainer" containerID="55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb" Feb 16 22:26:54 crc kubenswrapper[4792]: E0216 22:26:54.911270 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb\": container with ID starting with 55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb not found: ID does not exist" containerID="55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.911330 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb"} err="failed to get container status \"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb\": rpc error: code = NotFound desc = could not find container \"55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb\": container with ID starting with 55758ee8326246e6ffe08977786ef88d48e2e81f6a77989e7856dea1c644c4fb not found: ID does not exist" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.911367 4792 scope.go:117] "RemoveContainer" containerID="f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c" Feb 16 22:26:54 crc kubenswrapper[4792]: E0216 22:26:54.912040 
4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c\": container with ID starting with f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c not found: ID does not exist" containerID="f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.912075 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c"} err="failed to get container status \"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c\": rpc error: code = NotFound desc = could not find container \"f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c\": container with ID starting with f936ff57da6f94024b142c6cef9e54e9e3e0384cad5ff73d47e6d537fa811b2c not found: ID does not exist" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.912095 4792 scope.go:117] "RemoveContainer" containerID="37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a" Feb 16 22:26:54 crc kubenswrapper[4792]: E0216 22:26:54.912369 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a\": container with ID starting with 37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a not found: ID does not exist" containerID="37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a" Feb 16 22:26:54 crc kubenswrapper[4792]: I0216 22:26:54.912422 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a"} err="failed to get container status \"37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a\": rpc error: code = NotFound desc = could not find container \"37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a\": container with ID starting with 37f1bd71091e2179035f7d39608e85989012fbf0904d08ee3a62e51a4cc9749a not found: ID does not exist" Feb 16 22:26:56 crc kubenswrapper[4792]: E0216 22:26:56.032985 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:26:56 crc kubenswrapper[4792]: I0216 22:26:56.064384 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" path="/var/lib/kubelet/pods/7467294d-0ce2-4ecd-a857-dafa3f718355/volumes" Feb 16 22:27:00 crc kubenswrapper[4792]: E0216 22:27:00.028397 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:27:01 crc kubenswrapper[4792]: I0216 22:27:01.026369 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:27:01 crc kubenswrapper[4792]: E0216 22:27:01.026953 4792 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:27:11 crc kubenswrapper[4792]: E0216 22:27:11.028736 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:27:12 crc kubenswrapper[4792]: E0216 22:27:12.029456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:27:13 crc kubenswrapper[4792]: I0216 22:27:13.029142 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:27:13 crc kubenswrapper[4792]: E0216 22:27:13.029917 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:27:23 crc kubenswrapper[4792]: E0216 22:27:23.028833 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:27:23 crc kubenswrapper[4792]: E0216 22:27:23.028837 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:27:28 crc kubenswrapper[4792]: I0216 22:27:28.036919 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:27:28 crc kubenswrapper[4792]: E0216 22:27:28.037907 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.053217 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd"] Feb 16 
Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054441 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="extract-content" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054468 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="extract-content" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054511 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054525 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054555 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="extract-content" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054567 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="extract-content" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054636 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="extract-utilities" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054651 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="extract-utilities" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054680 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e792897f-1081-40d9-8e65-3f3ac21cd119" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054695 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e792897f-1081-40d9-8e65-3f3ac21cd119" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054741 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="extract-utilities" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054755 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="extract-utilities" Feb 16 22:27:32 crc kubenswrapper[4792]: E0216 22:27:32.054783 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.054794 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.055179 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9392534-ecbe-4c47-911f-5da1ea52e719" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.055202 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7467294d-0ce2-4ecd-a857-dafa3f718355" containerName="registry-server" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.055253 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e792897f-1081-40d9-8e65-3f3ac21cd119" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216
22:27:32.056691 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.059463 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd"] Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.079059 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.079873 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.080013 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.080746 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.080947 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.081109 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrm9\" (UniqueName: \"kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.081873 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.183805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.183888 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrm9\" (UniqueName: \"kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.184098 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.189670 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.189918 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.203624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrm9\" (UniqueName: \"kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.398948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:27:32 crc kubenswrapper[4792]: I0216 22:27:32.957780 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd"] Feb 16 22:27:33 crc kubenswrapper[4792]: I0216 22:27:33.217382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" event={"ID":"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba","Type":"ContainerStarted","Data":"db2de9886e4effe72c3ec73ee7873bdaa621c531afd07845e4f2f87ed48034c1"} Feb 16 22:27:34 crc kubenswrapper[4792]: I0216 22:27:34.234576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" event={"ID":"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba","Type":"ContainerStarted","Data":"0eadbfa37b7edaccc7c38d49dd52e9ae8367f0774d9d2768a85c5fb232e29cc0"} Feb 16 22:27:34 crc kubenswrapper[4792]: I0216 22:27:34.266929 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" podStartSLOduration=1.621085083 podStartE2EDuration="2.266912096s" podCreationTimestamp="2026-02-16 22:27:32 +0000 UTC" firstStartedPulling="2026-02-16 22:27:32.960000967 +0000 UTC m=+2985.613279858" lastFinishedPulling="2026-02-16 22:27:33.60582798 +0000 UTC m=+2986.259106871" observedRunningTime="2026-02-16 22:27:34.256339629 +0000 UTC m=+2986.909618520" watchObservedRunningTime="2026-02-16 22:27:34.266912096 +0000 UTC m=+2986.920190987" Feb 16 22:27:37 crc kubenswrapper[4792]: E0216 22:27:37.029240 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:27:38 crc kubenswrapper[4792]: E0216 22:27:38.036466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:27:39 crc kubenswrapper[4792]: I0216 22:27:39.026512 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:27:39 crc kubenswrapper[4792]: E0216 22:27:39.027180 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:27:50 crc kubenswrapper[4792]: E0216 22:27:50.028686 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:27:50 crc kubenswrapper[4792]: E0216 22:27:50.028725 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:27:53 crc kubenswrapper[4792]: I0216 22:27:53.026903 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:27:53 crc kubenswrapper[4792]: E0216 22:27:53.027550 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:02 crc kubenswrapper[4792]: E0216 22:28:02.030179 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:28:03 crc kubenswrapper[4792]: E0216 22:28:03.027848 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:28:04 crc kubenswrapper[4792]: I0216 22:28:04.026678 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:28:04 crc kubenswrapper[4792]: E0216 22:28:04.027297 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:16 crc kubenswrapper[4792]: E0216 22:28:16.030318 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:28:16 crc kubenswrapper[4792]: E0216 22:28:16.030852 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:28:17 crc kubenswrapper[4792]: I0216 22:28:17.027168 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:28:17 crc kubenswrapper[4792]: E0216 22:28:17.028082 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:29 crc kubenswrapper[4792]: E0216 22:28:29.028489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:28:30 crc kubenswrapper[4792]: I0216 22:28:30.026846 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:28:30 crc kubenswrapper[4792]: E0216 22:28:30.027294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:31 crc kubenswrapper[4792]: E0216 22:28:31.029383 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:28:43 crc kubenswrapper[4792]: I0216 22:28:43.026455 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:28:43 crc kubenswrapper[4792]: E0216 22:28:43.029214 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:28:43 crc kubenswrapper[4792]: E0216 22:28:43.031443 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:46 crc kubenswrapper[4792]: E0216 22:28:46.029515 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:28:56 crc kubenswrapper[4792]: I0216 22:28:56.026996 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:28:56 crc kubenswrapper[4792]: E0216 22:28:56.028159 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:28:58 crc kubenswrapper[4792]: E0216 22:28:58.028909 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:28:58 crc kubenswrapper[4792]: E0216 22:28:58.047033 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:29:09 crc kubenswrapper[4792]: I0216 22:29:09.027310 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:29:09 crc kubenswrapper[4792]: E0216 22:29:09.028356 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:29:10 crc kubenswrapper[4792]: I0216 22:29:10.030205 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:29:10 crc kubenswrapper[4792]: E0216 22:29:10.160220 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:29:10 crc kubenswrapper[4792]: E0216 22:29:10.160303 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:29:10 crc kubenswrapper[4792]: E0216 22:29:10.160464 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:29:10 crc kubenswrapper[4792]: E0216 22:29:10.161707 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
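
This is the underlying failure behind the ImagePullBackOff loop that has been repeating since 22:27: quay.rdoproject.org applies tag expiration, the current-tested tag was pruned ("deleted or has expired"), and every retry now fails outright with ErrImagePull before re-entering back-off. The usual mitigation is to reference the image by immutable digest (image@sha256:...) instead of a floating tag. A hypothetical helper, not part of this log's tooling, that resolves a still-existing tag to its digest through the standard registry v2 API (anonymous access assumed; some registries require a bearer token first, and this particular tag would now return an error):

import urllib.request

REPO = "podified-master-centos10/openstack-heat-engine"  # from the log above
TAG = "current-tested"

req = urllib.request.Request(
    f"https://quay.rdoproject.org/v2/{REPO}/manifests/{TAG}",
    method="HEAD",
    headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json,"
                       "application/vnd.oci.image.manifest.v1+json"},
)
with urllib.request.urlopen(req) as resp:
    # The registry echoes the manifest's immutable digest in this header.
    print(resp.headers["Docker-Content-Digest"])

A digest reference keeps working after the tag itself expires, at least until the registry garbage-collects the underlying manifest and blobs.
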
Feb 16 22:29:13 crc kubenswrapper[4792]: E0216 22:29:13.151036 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:29:13 crc kubenswrapper[4792]: E0216 22:29:13.151492 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:29:13 crc kubenswrapper[4792]: E0216 22:29:13.151827 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:29:13 crc kubenswrapper[4792]: E0216 22:29:13.153688 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:29:20 crc kubenswrapper[4792]: I0216 22:29:20.027253 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:29:20 crc kubenswrapper[4792]: E0216 22:29:20.029806 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:29:22 crc kubenswrapper[4792]: E0216 22:29:22.030170 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:29:28 crc kubenswrapper[4792]: E0216 22:29:28.036423 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:29:34 crc kubenswrapper[4792]: I0216 22:29:34.026632 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:29:34 crc kubenswrapper[4792]: E0216 22:29:34.027347 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:29:36 crc kubenswrapper[4792]: E0216 22:29:36.028775 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:29:39 crc kubenswrapper[4792]: E0216 22:29:39.028491 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:29:49 crc kubenswrapper[4792]: I0216 22:29:49.028259 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:29:49 crc kubenswrapper[4792]: E0216 22:29:49.030578 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:29:51 crc kubenswrapper[4792]: E0216 22:29:51.027656 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:29:51 crc kubenswrapper[4792]: E0216 22:29:51.027815 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.028066 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:30:00 crc kubenswrapper[4792]: E0216 22:30:00.029071 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.158270 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk"] Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.160216 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.164563 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.165437 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.186364 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk"] Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.238408 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.239022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.239336 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h8tv\" (UniqueName: \"kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.341438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h8tv\" (UniqueName: \"kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.341673 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.341786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.343698 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume\") pod 
\"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.358830 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.363661 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h8tv\" (UniqueName: \"kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv\") pod \"collect-profiles-29521350-2qsxk\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.489284 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:00 crc kubenswrapper[4792]: I0216 22:30:00.996047 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk"] Feb 16 22:30:01 crc kubenswrapper[4792]: I0216 22:30:01.302637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" event={"ID":"b068db64-d873-4f93-b01a-7775abe02348","Type":"ContainerStarted","Data":"69929d9ab6871b54421bb1ebfd3c9e0df59b14371428b8b88c8e7cd1b747aa90"} Feb 16 22:30:01 crc kubenswrapper[4792]: I0216 22:30:01.302691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" event={"ID":"b068db64-d873-4f93-b01a-7775abe02348","Type":"ContainerStarted","Data":"943732fddf6bcdd0c7530db8e4942ccb24a1a5381be69a73985d3c0c16c91ee7"} Feb 16 22:30:01 crc kubenswrapper[4792]: I0216 22:30:01.338821 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" podStartSLOduration=1.338797676 podStartE2EDuration="1.338797676s" podCreationTimestamp="2026-02-16 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:30:01.331982521 +0000 UTC m=+3133.985261412" watchObservedRunningTime="2026-02-16 22:30:01.338797676 +0000 UTC m=+3133.992076577" Feb 16 22:30:02 crc kubenswrapper[4792]: E0216 22:30:02.028177 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:30:02 crc kubenswrapper[4792]: I0216 22:30:02.312959 4792 generic.go:334] "Generic (PLEG): container finished" podID="b068db64-d873-4f93-b01a-7775abe02348" containerID="69929d9ab6871b54421bb1ebfd3c9e0df59b14371428b8b88c8e7cd1b747aa90" exitCode=0 Feb 16 22:30:02 crc kubenswrapper[4792]: I0216 22:30:02.313010 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" 
event={"ID":"b068db64-d873-4f93-b01a-7775abe02348","Type":"ContainerDied","Data":"69929d9ab6871b54421bb1ebfd3c9e0df59b14371428b8b88c8e7cd1b747aa90"} Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.685257 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.740281 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume\") pod \"b068db64-d873-4f93-b01a-7775abe02348\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.740486 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h8tv\" (UniqueName: \"kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv\") pod \"b068db64-d873-4f93-b01a-7775abe02348\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.740665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume\") pod \"b068db64-d873-4f93-b01a-7775abe02348\" (UID: \"b068db64-d873-4f93-b01a-7775abe02348\") " Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.743043 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume" (OuterVolumeSpecName: "config-volume") pod "b068db64-d873-4f93-b01a-7775abe02348" (UID: "b068db64-d873-4f93-b01a-7775abe02348"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.747836 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv" (OuterVolumeSpecName: "kube-api-access-8h8tv") pod "b068db64-d873-4f93-b01a-7775abe02348" (UID: "b068db64-d873-4f93-b01a-7775abe02348"). InnerVolumeSpecName "kube-api-access-8h8tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.749829 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b068db64-d873-4f93-b01a-7775abe02348" (UID: "b068db64-d873-4f93-b01a-7775abe02348"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.843847 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b068db64-d873-4f93-b01a-7775abe02348-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.843896 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h8tv\" (UniqueName: \"kubernetes.io/projected/b068db64-d873-4f93-b01a-7775abe02348-kube-api-access-8h8tv\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:03 crc kubenswrapper[4792]: I0216 22:30:03.843908 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b068db64-d873-4f93-b01a-7775abe02348-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:30:04 crc kubenswrapper[4792]: I0216 22:30:04.336034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" event={"ID":"b068db64-d873-4f93-b01a-7775abe02348","Type":"ContainerDied","Data":"943732fddf6bcdd0c7530db8e4942ccb24a1a5381be69a73985d3c0c16c91ee7"} Feb 16 22:30:04 crc kubenswrapper[4792]: I0216 22:30:04.336357 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943732fddf6bcdd0c7530db8e4942ccb24a1a5381be69a73985d3c0c16c91ee7" Feb 16 22:30:04 crc kubenswrapper[4792]: I0216 22:30:04.336076 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk" Feb 16 22:30:04 crc kubenswrapper[4792]: I0216 22:30:04.771825 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw"] Feb 16 22:30:04 crc kubenswrapper[4792]: I0216 22:30:04.786264 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-69chw"] Feb 16 22:30:06 crc kubenswrapper[4792]: E0216 22:30:06.030489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:06 crc kubenswrapper[4792]: I0216 22:30:06.042792 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724f6800-0c88-4704-b4fe-a7a3df7b7783" path="/var/lib/kubelet/pods/724f6800-0c88-4704-b4fe-a7a3df7b7783/volumes" Feb 16 22:30:13 crc kubenswrapper[4792]: E0216 22:30:13.030131 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:30:15 crc kubenswrapper[4792]: I0216 22:30:15.026934 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:30:15 crc kubenswrapper[4792]: E0216 22:30:15.028633 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 16 22:30:15 crc kubenswrapper[4792]: E0216 22:30:15.028633 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:30:20 crc kubenswrapper[4792]: E0216 22:30:20.028365 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:26 crc kubenswrapper[4792]: E0216 22:30:26.028161 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:30:28 crc kubenswrapper[4792]: I0216 22:30:28.034348 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:30:28 crc kubenswrapper[4792]: E0216 22:30:28.035066 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:30:31 crc kubenswrapper[4792]: E0216 22:30:31.030967 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:39 crc kubenswrapper[4792]: E0216 22:30:39.030002 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:30:42 crc kubenswrapper[4792]: E0216 22:30:42.028707 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:43 crc kubenswrapper[4792]: I0216 22:30:43.027822 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:30:43 crc kubenswrapper[4792]: E0216 22:30:43.028479 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4"
podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:30:53 crc kubenswrapper[4792]: E0216 22:30:53.028970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:30:54 crc kubenswrapper[4792]: E0216 22:30:54.029417 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:30:57 crc kubenswrapper[4792]: I0216 22:30:57.026522 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:30:57 crc kubenswrapper[4792]: E0216 22:30:57.027378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:31:01 crc kubenswrapper[4792]: I0216 22:31:01.183389 4792 scope.go:117] "RemoveContainer" containerID="864c464d1808ca9d4ac750e3ed44001320159ce91f51f5af29620dda2adc4352" Feb 16 22:31:06 crc kubenswrapper[4792]: E0216 22:31:06.030270 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:31:07 crc kubenswrapper[4792]: E0216 22:31:07.032359 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:31:11 crc kubenswrapper[4792]: I0216 22:31:11.027231 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:31:11 crc kubenswrapper[4792]: E0216 22:31:11.028326 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:31:20 crc kubenswrapper[4792]: E0216 22:31:20.029687 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" 
podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:31:21 crc kubenswrapper[4792]: E0216 22:31:21.028031 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:31:24 crc kubenswrapper[4792]: I0216 22:31:24.026989 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:31:24 crc kubenswrapper[4792]: E0216 22:31:24.027874 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:31:31 crc kubenswrapper[4792]: E0216 22:31:31.030617 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:31:35 crc kubenswrapper[4792]: I0216 22:31:35.026588 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:31:35 crc kubenswrapper[4792]: I0216 22:31:35.468430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d"} Feb 16 22:31:36 crc kubenswrapper[4792]: E0216 22:31:36.033490 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.387385 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dg4qz"] Feb 16 22:31:39 crc kubenswrapper[4792]: E0216 22:31:39.388565 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b068db64-d873-4f93-b01a-7775abe02348" containerName="collect-profiles" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.388623 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b068db64-d873-4f93-b01a-7775abe02348" containerName="collect-profiles" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.388960 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b068db64-d873-4f93-b01a-7775abe02348" containerName="collect-profiles" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.391256 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.433622 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dg4qz"] Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.521000 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.521083 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfv4\" (UniqueName: \"kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.521151 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.624952 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.625153 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.625183 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfv4\" (UniqueName: \"kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.625868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.625966 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.663065 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5qfv4\" (UniqueName: \"kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4\") pod \"certified-operators-dg4qz\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:39 crc kubenswrapper[4792]: I0216 22:31:39.716628 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:40 crc kubenswrapper[4792]: I0216 22:31:40.257899 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dg4qz"] Feb 16 22:31:40 crc kubenswrapper[4792]: I0216 22:31:40.531316 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerStarted","Data":"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac"} Feb 16 22:31:40 crc kubenswrapper[4792]: I0216 22:31:40.532537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerStarted","Data":"3b2fac5763d680d3195fcccc57749cbbd4194b132e7a76a1f5d1623dbe490fbb"} Feb 16 22:31:41 crc kubenswrapper[4792]: I0216 22:31:41.542867 4792 generic.go:334] "Generic (PLEG): container finished" podID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerID="f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac" exitCode=0 Feb 16 22:31:41 crc kubenswrapper[4792]: I0216 22:31:41.542925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerDied","Data":"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac"} Feb 16 22:31:42 crc kubenswrapper[4792]: I0216 22:31:42.557343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerStarted","Data":"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd"} Feb 16 22:31:43 crc kubenswrapper[4792]: E0216 22:31:43.028860 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:31:44 crc kubenswrapper[4792]: I0216 22:31:44.588021 4792 generic.go:334] "Generic (PLEG): container finished" podID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerID="60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd" exitCode=0 Feb 16 22:31:44 crc kubenswrapper[4792]: I0216 22:31:44.588099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerDied","Data":"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd"} Feb 16 22:31:45 crc kubenswrapper[4792]: I0216 22:31:45.601258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerStarted","Data":"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722"} Feb 16 22:31:45 crc 
kubenswrapper[4792]: I0216 22:31:45.630043 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dg4qz" podStartSLOduration=3.159349038 podStartE2EDuration="6.630024668s" podCreationTimestamp="2026-02-16 22:31:39 +0000 UTC" firstStartedPulling="2026-02-16 22:31:41.545998268 +0000 UTC m=+3234.199277159" lastFinishedPulling="2026-02-16 22:31:45.016673858 +0000 UTC m=+3237.669952789" observedRunningTime="2026-02-16 22:31:45.620270984 +0000 UTC m=+3238.273549875" watchObservedRunningTime="2026-02-16 22:31:45.630024668 +0000 UTC m=+3238.283303559"
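
The pod_startup_latency_tracker entry above is self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span net of the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets, which are immune to wall-clock adjustments). A quick check of the logged numbers:

    # Reproducing the two durations logged above for certified-operators-dg4qz.
    e2e = 6.630024668             # watchObservedRunningTime 22:31:45.630024668
                                  # minus podCreationTimestamp 22:31:39 (wall clock)
    pull_start = 3234.199277159   # firstStartedPulling, monotonic m=+ offset
    pull_end = 3237.669952789     # lastFinishedPulling, monotonic m=+ offset

    slo = e2e - (pull_end - pull_start)       # startup time net of image pulling
    print(f"podStartSLOduration={slo:.9f}")   # -> 3.159349038, as logged

So roughly 3.47s of the 6.63s end-to-end startup went to pulling images.
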
Need to start a new one" pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.392498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities\") pod \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.392659 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content\") pod \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.393057 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfv4\" (UniqueName: \"kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4\") pod \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\" (UID: \"69b8b4ee-338d-4ccc-8876-85d3fc8ce165\") " Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.393372 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities" (OuterVolumeSpecName: "utilities") pod "69b8b4ee-338d-4ccc-8876-85d3fc8ce165" (UID: "69b8b4ee-338d-4ccc-8876-85d3fc8ce165"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.393626 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.399396 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4" (OuterVolumeSpecName: "kube-api-access-5qfv4") pod "69b8b4ee-338d-4ccc-8876-85d3fc8ce165" (UID: "69b8b4ee-338d-4ccc-8876-85d3fc8ce165"). InnerVolumeSpecName "kube-api-access-5qfv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.449248 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b8b4ee-338d-4ccc-8876-85d3fc8ce165" (UID: "69b8b4ee-338d-4ccc-8876-85d3fc8ce165"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.496144 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfv4\" (UniqueName: \"kubernetes.io/projected/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-kube-api-access-5qfv4\") on node \"crc\" DevicePath \"\"" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.496181 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b8b4ee-338d-4ccc-8876-85d3fc8ce165-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.718151 4792 generic.go:334] "Generic (PLEG): container finished" podID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerID="38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722" exitCode=0 Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.718229 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dg4qz" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.718266 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerDied","Data":"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722"} Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.718584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg4qz" event={"ID":"69b8b4ee-338d-4ccc-8876-85d3fc8ce165","Type":"ContainerDied","Data":"3b2fac5763d680d3195fcccc57749cbbd4194b132e7a76a1f5d1623dbe490fbb"} Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.718653 4792 scope.go:117] "RemoveContainer" containerID="38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.751734 4792 scope.go:117] "RemoveContainer" containerID="60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.764772 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dg4qz"] Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.775882 4792 scope.go:117] "RemoveContainer" containerID="f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.784500 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dg4qz"] Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.855226 4792 scope.go:117] "RemoveContainer" containerID="38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722" Feb 16 22:31:53 crc kubenswrapper[4792]: E0216 22:31:53.855746 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722\": container with ID starting with 38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722 not found: ID does not exist" containerID="38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.855791 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722"} err="failed to get container status 
\"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722\": rpc error: code = NotFound desc = could not find container \"38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722\": container with ID starting with 38a5c508aea30ce36908be956cd060dc20d723d0921985e86356622e965db722 not found: ID does not exist" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.855817 4792 scope.go:117] "RemoveContainer" containerID="60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd" Feb 16 22:31:53 crc kubenswrapper[4792]: E0216 22:31:53.856199 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd\": container with ID starting with 60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd not found: ID does not exist" containerID="60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.856255 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd"} err="failed to get container status \"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd\": rpc error: code = NotFound desc = could not find container \"60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd\": container with ID starting with 60ecbcc2684a2c312754812c478abacf185ee536934c5b6e9ab2aa3e93f1ebdd not found: ID does not exist" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.856302 4792 scope.go:117] "RemoveContainer" containerID="f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac" Feb 16 22:31:53 crc kubenswrapper[4792]: E0216 22:31:53.856735 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac\": container with ID starting with f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac not found: ID does not exist" containerID="f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac" Feb 16 22:31:53 crc kubenswrapper[4792]: I0216 22:31:53.856789 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac"} err="failed to get container status \"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac\": rpc error: code = NotFound desc = could not find container \"f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac\": container with ID starting with f95e26d20707f651886a406f5f80c9c3aa92360e0a5b66bc1cc83dd10d52b4ac not found: ID does not exist" Feb 16 22:31:54 crc kubenswrapper[4792]: E0216 22:31:54.029110 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:31:54 crc kubenswrapper[4792]: I0216 22:31:54.041992 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" path="/var/lib/kubelet/pods/69b8b4ee-338d-4ccc-8876-85d3fc8ce165/volumes" Feb 16 22:32:02 crc kubenswrapper[4792]: E0216 22:32:02.028992 4792 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:32:07 crc kubenswrapper[4792]: E0216 22:32:07.028691 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:32:15 crc kubenswrapper[4792]: E0216 22:32:15.029421 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:32:21 crc kubenswrapper[4792]: E0216 22:32:21.029931 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:32:30 crc kubenswrapper[4792]: E0216 22:32:30.029584 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:32:33 crc kubenswrapper[4792]: E0216 22:32:33.030638 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:32:43 crc kubenswrapper[4792]: E0216 22:32:43.031899 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:32:46 crc kubenswrapper[4792]: E0216 22:32:46.027832 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:32:56 crc kubenswrapper[4792]: E0216 22:32:56.029820 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:32:58 crc 
kubenswrapper[4792]: E0216 22:32:58.035495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:33:09 crc kubenswrapper[4792]: E0216 22:33:09.029998 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:33:10 crc kubenswrapper[4792]: E0216 22:33:10.033957 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:33:20 crc kubenswrapper[4792]: E0216 22:33:20.030086 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:33:24 crc kubenswrapper[4792]: E0216 22:33:24.029209 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:33:32 crc kubenswrapper[4792]: E0216 22:33:32.031807 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:33:37 crc kubenswrapper[4792]: E0216 22:33:37.030401 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:33:47 crc kubenswrapper[4792]: E0216 22:33:47.028479 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:33:48 crc kubenswrapper[4792]: E0216 22:33:48.044394 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" 
podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:33:52 crc kubenswrapper[4792]: I0216 22:33:52.141245 4792 generic.go:334] "Generic (PLEG): container finished" podID="3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" containerID="0eadbfa37b7edaccc7c38d49dd52e9ae8367f0774d9d2768a85c5fb232e29cc0" exitCode=2 Feb 16 22:33:52 crc kubenswrapper[4792]: I0216 22:33:52.141335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" event={"ID":"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba","Type":"ContainerDied","Data":"0eadbfa37b7edaccc7c38d49dd52e9ae8367f0774d9d2768a85c5fb232e29cc0"} Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.705111 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.727615 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrm9\" (UniqueName: \"kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9\") pod \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.727680 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory\") pod \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.732915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9" (OuterVolumeSpecName: "kube-api-access-jxrm9") pod "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" (UID: "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba"). InnerVolumeSpecName "kube-api-access-jxrm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.760854 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory" (OuterVolumeSpecName: "inventory") pod "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" (UID: "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.829994 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam\") pod \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\" (UID: \"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba\") " Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.832558 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxrm9\" (UniqueName: \"kubernetes.io/projected/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-kube-api-access-jxrm9\") on node \"crc\" DevicePath \"\"" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.832659 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.878131 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" (UID: "3b2e7368-cabe-42cf-8b3f-8e6b743e8bba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:33:53 crc kubenswrapper[4792]: I0216 22:33:53.935083 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b2e7368-cabe-42cf-8b3f-8e6b743e8bba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:33:54 crc kubenswrapper[4792]: I0216 22:33:54.170544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" event={"ID":"3b2e7368-cabe-42cf-8b3f-8e6b743e8bba","Type":"ContainerDied","Data":"db2de9886e4effe72c3ec73ee7873bdaa621c531afd07845e4f2f87ed48034c1"} Feb 16 22:33:54 crc kubenswrapper[4792]: I0216 22:33:54.170583 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db2de9886e4effe72c3ec73ee7873bdaa621c531afd07845e4f2f87ed48034c1" Feb 16 22:33:54 crc kubenswrapper[4792]: I0216 22:33:54.170620 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd" Feb 16 22:33:58 crc kubenswrapper[4792]: E0216 22:33:58.035285 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:34:00 crc kubenswrapper[4792]: E0216 22:34:00.030269 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:34:01 crc kubenswrapper[4792]: I0216 22:34:01.532782 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:34:01 crc kubenswrapper[4792]: I0216 22:34:01.533178 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.837553 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:03 crc kubenswrapper[4792]: E0216 22:34:03.838614 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="extract-content" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838633 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="extract-content" Feb 16 22:34:03 crc kubenswrapper[4792]: E0216 22:34:03.838677 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838686 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:34:03 crc kubenswrapper[4792]: E0216 22:34:03.838703 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="extract-utilities" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838710 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="extract-utilities" Feb 16 22:34:03 crc kubenswrapper[4792]: E0216 22:34:03.838724 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="registry-server" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838732 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="registry-server" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838965 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2e7368-cabe-42cf-8b3f-8e6b743e8bba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.838976 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8b4ee-338d-4ccc-8876-85d3fc8ce165" containerName="registry-server" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.841984 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.855951 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.996550 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57d5v\" (UniqueName: \"kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.996717 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:03 crc kubenswrapper[4792]: I0216 22:34:03.996935 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.098924 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.099053 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57d5v\" (UniqueName: \"kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.099155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.099398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc 
kubenswrapper[4792]: I0216 22:34:04.099700 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.122939 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57d5v\" (UniqueName: \"kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v\") pod \"redhat-operators-bvwwk\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.202781 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:04 crc kubenswrapper[4792]: I0216 22:34:04.735082 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:05 crc kubenswrapper[4792]: I0216 22:34:05.310556 4792 generic.go:334] "Generic (PLEG): container finished" podID="47878699-810d-4bbc-9796-3a705257b6b2" containerID="fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc" exitCode=0 Feb 16 22:34:05 crc kubenswrapper[4792]: I0216 22:34:05.310631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerDied","Data":"fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc"} Feb 16 22:34:05 crc kubenswrapper[4792]: I0216 22:34:05.310885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerStarted","Data":"5c8b3674e90003ae740501798d35e3eaae01b7c814aa2e608ab31f1d08de01e4"} Feb 16 22:34:06 crc kubenswrapper[4792]: I0216 22:34:06.322482 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerStarted","Data":"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c"} Feb 16 22:34:10 crc kubenswrapper[4792]: I0216 22:34:10.377774 4792 generic.go:334] "Generic (PLEG): container finished" podID="47878699-810d-4bbc-9796-3a705257b6b2" containerID="4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c" exitCode=0 Feb 16 22:34:10 crc kubenswrapper[4792]: I0216 22:34:10.378738 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerDied","Data":"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c"} Feb 16 22:34:10 crc kubenswrapper[4792]: I0216 22:34:10.382737 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:34:11 crc kubenswrapper[4792]: E0216 22:34:11.147109 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:34:11 crc kubenswrapper[4792]: E0216 22:34:11.147765 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:34:11 crc kubenswrapper[4792]: E0216 22:34:11.147915 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:34:11 crc kubenswrapper[4792]: E0216 22:34:11.149289 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:34:11 crc kubenswrapper[4792]: I0216 22:34:11.403663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerStarted","Data":"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7"} Feb 16 22:34:13 crc kubenswrapper[4792]: E0216 22:34:13.028191 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:34:14 crc kubenswrapper[4792]: I0216 22:34:14.203279 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:14 crc kubenswrapper[4792]: I0216 22:34:14.203992 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:15 crc kubenswrapper[4792]: I0216 22:34:15.256340 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bvwwk" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="registry-server" probeResult="failure" output=< Feb 16 22:34:15 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:34:15 crc kubenswrapper[4792]: > Feb 16 22:34:24 crc kubenswrapper[4792]: E0216 22:34:24.029204 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:34:24 crc kubenswrapper[4792]: I0216 22:34:24.050183 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bvwwk" podStartSLOduration=15.376187279 podStartE2EDuration="21.050164676s" podCreationTimestamp="2026-02-16 22:34:03 +0000 UTC" firstStartedPulling="2026-02-16 22:34:05.312560752 +0000 UTC m=+3377.965839643" lastFinishedPulling="2026-02-16 22:34:10.986538149 +0000 UTC m=+3383.639817040" observedRunningTime="2026-02-16 22:34:11.451039457 +0000 UTC m=+3384.104318348" watchObservedRunningTime="2026-02-16 22:34:24.050164676 +0000 UTC m=+3396.703443567" Feb 16 22:34:24 crc kubenswrapper[4792]: I0216 22:34:24.282834 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:24 crc kubenswrapper[4792]: I0216 22:34:24.374381 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:24 crc kubenswrapper[4792]: I0216 22:34:24.526985 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:25 crc kubenswrapper[4792]: I0216 22:34:25.632541 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bvwwk" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="registry-server" containerID="cri-o://0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7" gracePeriod=2 Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.228507 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.321032 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57d5v\" (UniqueName: \"kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v\") pod \"47878699-810d-4bbc-9796-3a705257b6b2\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.321205 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content\") pod \"47878699-810d-4bbc-9796-3a705257b6b2\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.327207 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities\") pod \"47878699-810d-4bbc-9796-3a705257b6b2\" (UID: \"47878699-810d-4bbc-9796-3a705257b6b2\") " Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.327959 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities" (OuterVolumeSpecName: "utilities") pod "47878699-810d-4bbc-9796-3a705257b6b2" (UID: "47878699-810d-4bbc-9796-3a705257b6b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.328390 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.339951 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v" (OuterVolumeSpecName: "kube-api-access-57d5v") pod "47878699-810d-4bbc-9796-3a705257b6b2" (UID: "47878699-810d-4bbc-9796-3a705257b6b2"). InnerVolumeSpecName "kube-api-access-57d5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.430571 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57d5v\" (UniqueName: \"kubernetes.io/projected/47878699-810d-4bbc-9796-3a705257b6b2-kube-api-access-57d5v\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.458932 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47878699-810d-4bbc-9796-3a705257b6b2" (UID: "47878699-810d-4bbc-9796-3a705257b6b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.532958 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47878699-810d-4bbc-9796-3a705257b6b2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.645450 4792 generic.go:334] "Generic (PLEG): container finished" podID="47878699-810d-4bbc-9796-3a705257b6b2" containerID="0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7" exitCode=0 Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.645510 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerDied","Data":"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7"} Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.645574 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvwwk" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.645998 4792 scope.go:117] "RemoveContainer" containerID="0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.645838 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvwwk" event={"ID":"47878699-810d-4bbc-9796-3a705257b6b2","Type":"ContainerDied","Data":"5c8b3674e90003ae740501798d35e3eaae01b7c814aa2e608ab31f1d08de01e4"} Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.667352 4792 scope.go:117] "RemoveContainer" containerID="4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.692487 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.709036 4792 scope.go:117] "RemoveContainer" containerID="fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.711974 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bvwwk"] Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.749699 4792 scope.go:117] "RemoveContainer" containerID="0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7" Feb 16 22:34:26 crc kubenswrapper[4792]: E0216 22:34:26.750107 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7\": container with ID starting with 0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7 
not found: ID does not exist" containerID="0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.750143 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7"} err="failed to get container status \"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7\": rpc error: code = NotFound desc = could not find container \"0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7\": container with ID starting with 0da1e464fc574cef3c6a01e02eb74ef0d9be8a728f06dd0067d6208aff7bd9a7 not found: ID does not exist" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.750168 4792 scope.go:117] "RemoveContainer" containerID="4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c" Feb 16 22:34:26 crc kubenswrapper[4792]: E0216 22:34:26.750367 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c\": container with ID starting with 4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c not found: ID does not exist" containerID="4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.750389 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c"} err="failed to get container status \"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c\": rpc error: code = NotFound desc = could not find container \"4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c\": container with ID starting with 4b3dc5ac804a4f143545a00968ec6da82ee2b1d7558dbe8f419a1eea0109be1c not found: ID does not exist" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.750401 4792 scope.go:117] "RemoveContainer" containerID="fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc" Feb 16 22:34:26 crc kubenswrapper[4792]: E0216 22:34:26.750704 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc\": container with ID starting with fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc not found: ID does not exist" containerID="fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc" Feb 16 22:34:26 crc kubenswrapper[4792]: I0216 22:34:26.750754 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc"} err="failed to get container status \"fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc\": rpc error: code = NotFound desc = could not find container \"fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc\": container with ID starting with fde0bd236b6fabb677f2f98ed7768236eb5980dc0da90307965d12471518b4fc not found: ID does not exist" Feb 16 22:34:27 crc kubenswrapper[4792]: E0216 22:34:27.158033 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:34:27 crc kubenswrapper[4792]: E0216 22:34:27.158092 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:34:27 crc kubenswrapper[4792]: E0216 22:34:27.158219 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:34:27 crc kubenswrapper[4792]: E0216 22:34:27.159423 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:34:28 crc kubenswrapper[4792]: I0216 22:34:28.037424 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47878699-810d-4bbc-9796-3a705257b6b2" path="/var/lib/kubelet/pods/47878699-810d-4bbc-9796-3a705257b6b2/volumes" Feb 16 22:34:31 crc kubenswrapper[4792]: I0216 22:34:31.532554 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:34:31 crc kubenswrapper[4792]: I0216 22:34:31.533322 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:34:39 crc kubenswrapper[4792]: E0216 22:34:39.028584 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:34:39 crc kubenswrapper[4792]: E0216 22:34:39.028771 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:34:54 crc kubenswrapper[4792]: E0216 22:34:54.029474 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:34:54 crc kubenswrapper[4792]: E0216 22:34:54.029496 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:35:01 crc kubenswrapper[4792]: I0216 22:35:01.532567 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:35:01 crc kubenswrapper[4792]: I0216 22:35:01.533181 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:35:01 crc kubenswrapper[4792]: I0216 22:35:01.533224 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:35:01 crc kubenswrapper[4792]: I0216 22:35:01.534149 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:35:01 crc kubenswrapper[4792]: I0216 22:35:01.534222 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d" gracePeriod=600 Feb 16 22:35:02 crc kubenswrapper[4792]: I0216 22:35:02.077481 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d" exitCode=0 Feb 16 22:35:02 crc kubenswrapper[4792]: I0216 22:35:02.077573 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d"} Feb 16 22:35:02 crc kubenswrapper[4792]: I0216 22:35:02.077955 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"} Feb 16 22:35:02 crc kubenswrapper[4792]: I0216 22:35:02.077998 4792 scope.go:117] "RemoveContainer" containerID="151ee8a4f80c48a504f2b00d54c4aeac51e043bf81346326b94394ed6e0dbe5e" Feb 16 22:35:05 crc kubenswrapper[4792]: E0216 22:35:05.030835 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:35:08 crc kubenswrapper[4792]: E0216 22:35:08.045862 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.053672 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj"] Feb 16 22:35:11 crc kubenswrapper[4792]: E0216 22:35:11.054661 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="registry-server" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.054937 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="registry-server" Feb 16 22:35:11 crc kubenswrapper[4792]: E0216 22:35:11.054970 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="extract-content" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.054982 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="extract-content" Feb 16 22:35:11 crc kubenswrapper[4792]: E0216 22:35:11.055001 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="extract-utilities" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.055012 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="extract-utilities" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.055368 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="47878699-810d-4bbc-9796-3a705257b6b2" containerName="registry-server" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.056516 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.059349 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.059668 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.060471 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.066782 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.072814 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj"] Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.119070 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.119806 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.119928 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcct\" (UniqueName: \"kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.222143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.222320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.222372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcct\" (UniqueName: \"kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.228430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.229107 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.244318 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcct\" (UniqueName: \"kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k8djj\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.393515 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:35:11 crc kubenswrapper[4792]: I0216 22:35:11.977857 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj"] Feb 16 22:35:12 crc kubenswrapper[4792]: I0216 22:35:12.217809 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" event={"ID":"1fd88c0f-2daa-4b0f-b372-141a953ab8b0","Type":"ContainerStarted","Data":"efc18fb7e51cd35057ffa8115e58fdf3dd4657b29d50c219973f611c40ed7825"} Feb 16 22:35:13 crc kubenswrapper[4792]: I0216 22:35:13.236863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" event={"ID":"1fd88c0f-2daa-4b0f-b372-141a953ab8b0","Type":"ContainerStarted","Data":"88fd22d2b8cd79c122f778d72c198142ec03ff956a4c779fccf1ea0aaf2d5267"} Feb 16 22:35:13 crc kubenswrapper[4792]: I0216 22:35:13.255042 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" podStartSLOduration=1.753045972 podStartE2EDuration="2.255018003s" podCreationTimestamp="2026-02-16 22:35:11 +0000 UTC" firstStartedPulling="2026-02-16 22:35:11.991172585 +0000 UTC m=+3444.644451476" lastFinishedPulling="2026-02-16 22:35:12.493144616 +0000 UTC m=+3445.146423507" observedRunningTime="2026-02-16 22:35:13.252249178 +0000 UTC m=+3445.905528089" watchObservedRunningTime="2026-02-16 22:35:13.255018003 +0000 UTC m=+3445.908296894" Feb 16 22:35:20 crc kubenswrapper[4792]: E0216 22:35:20.028701 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:35:20 crc kubenswrapper[4792]: E0216 22:35:20.028702 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:35:32 crc kubenswrapper[4792]: E0216 22:35:32.029557 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:35:33 crc kubenswrapper[4792]: E0216 22:35:33.029105 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:35:43 crc kubenswrapper[4792]: E0216 22:35:43.028979 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:35:46 crc kubenswrapper[4792]: E0216 22:35:46.027956 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:35:56 crc kubenswrapper[4792]: E0216 22:35:56.028511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:36:00 crc kubenswrapper[4792]: E0216 22:36:00.028396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:36:10 crc kubenswrapper[4792]: E0216 22:36:10.028558 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:36:14 crc kubenswrapper[4792]: E0216 22:36:14.029408 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:36:23 crc kubenswrapper[4792]: E0216 22:36:23.030674 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:36:28 crc kubenswrapper[4792]: E0216 22:36:28.036675 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:36:34 crc kubenswrapper[4792]: E0216 22:36:34.029924 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:36:39 crc kubenswrapper[4792]: E0216 22:36:39.030026 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:36:46 crc kubenswrapper[4792]: E0216 22:36:46.028645 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:36:50 crc kubenswrapper[4792]: E0216 22:36:50.029513 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:36:57 crc kubenswrapper[4792]: E0216 22:36:57.033303 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:01 crc kubenswrapper[4792]: I0216 22:37:01.532830 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:37:01 crc kubenswrapper[4792]: I0216 22:37:01.533301 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:37:05 crc kubenswrapper[4792]: E0216 22:37:05.029428 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:37:09 crc kubenswrapper[4792]: E0216 22:37:09.030268 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:16 crc kubenswrapper[4792]: E0216 22:37:16.030446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:37:20 crc kubenswrapper[4792]: E0216 22:37:20.028936 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.672685 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.675618 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.741793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.788404 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.788754 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8hj2\" (UniqueName: \"kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.788942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.891036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8hj2\" (UniqueName: \"kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.891177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.891255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.891732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc 
kubenswrapper[4792]: I0216 22:37:26.891754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:26 crc kubenswrapper[4792]: I0216 22:37:26.910495 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8hj2\" (UniqueName: \"kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2\") pod \"community-operators-b4t5g\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:27 crc kubenswrapper[4792]: I0216 22:37:27.041528 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:27 crc kubenswrapper[4792]: I0216 22:37:27.563012 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:28 crc kubenswrapper[4792]: I0216 22:37:28.106765 4792 generic.go:334] "Generic (PLEG): container finished" podID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerID="9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add" exitCode=0 Feb 16 22:37:28 crc kubenswrapper[4792]: I0216 22:37:28.106894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerDied","Data":"9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add"} Feb 16 22:37:28 crc kubenswrapper[4792]: I0216 22:37:28.106923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerStarted","Data":"a19f34dd94aaafebb1324fbca29d75b92ae19a717c49db9fc92afd6c04b190f6"} Feb 16 22:37:29 crc kubenswrapper[4792]: E0216 22:37:29.029084 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:37:29 crc kubenswrapper[4792]: I0216 22:37:29.129107 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerStarted","Data":"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3"} Feb 16 22:37:31 crc kubenswrapper[4792]: I0216 22:37:31.152947 4792 generic.go:334] "Generic (PLEG): container finished" podID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerID="8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3" exitCode=0 Feb 16 22:37:31 crc kubenswrapper[4792]: I0216 22:37:31.153033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerDied","Data":"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3"} Feb 16 22:37:31 crc kubenswrapper[4792]: I0216 22:37:31.532341 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:37:31 crc kubenswrapper[4792]: I0216 22:37:31.532409 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:37:32 crc kubenswrapper[4792]: I0216 22:37:32.165092 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerStarted","Data":"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56"} Feb 16 22:37:32 crc kubenswrapper[4792]: I0216 22:37:32.194180 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4t5g" podStartSLOduration=2.715262545 podStartE2EDuration="6.194158507s" podCreationTimestamp="2026-02-16 22:37:26 +0000 UTC" firstStartedPulling="2026-02-16 22:37:28.108806669 +0000 UTC m=+3580.762085570" lastFinishedPulling="2026-02-16 22:37:31.587702651 +0000 UTC m=+3584.240981532" observedRunningTime="2026-02-16 22:37:32.182174502 +0000 UTC m=+3584.835453413" watchObservedRunningTime="2026-02-16 22:37:32.194158507 +0000 UTC m=+3584.847437408" Feb 16 22:37:33 crc kubenswrapper[4792]: E0216 22:37:33.028115 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:37 crc kubenswrapper[4792]: I0216 22:37:37.041801 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:37 crc kubenswrapper[4792]: I0216 22:37:37.042373 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:37 crc kubenswrapper[4792]: I0216 22:37:37.089833 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:37 crc kubenswrapper[4792]: I0216 22:37:37.293348 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:37 crc kubenswrapper[4792]: I0216 22:37:37.339661 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.238808 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4t5g" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="registry-server" containerID="cri-o://0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56" gracePeriod=2 Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.753235 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.859078 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8hj2\" (UniqueName: \"kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2\") pod \"0806cdeb-2594-482d-93f1-144b9096b8e4\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.859229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities\") pod \"0806cdeb-2594-482d-93f1-144b9096b8e4\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.859302 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content\") pod \"0806cdeb-2594-482d-93f1-144b9096b8e4\" (UID: \"0806cdeb-2594-482d-93f1-144b9096b8e4\") " Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.860458 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities" (OuterVolumeSpecName: "utilities") pod "0806cdeb-2594-482d-93f1-144b9096b8e4" (UID: "0806cdeb-2594-482d-93f1-144b9096b8e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.868799 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2" (OuterVolumeSpecName: "kube-api-access-t8hj2") pod "0806cdeb-2594-482d-93f1-144b9096b8e4" (UID: "0806cdeb-2594-482d-93f1-144b9096b8e4"). InnerVolumeSpecName "kube-api-access-t8hj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.962041 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8hj2\" (UniqueName: \"kubernetes.io/projected/0806cdeb-2594-482d-93f1-144b9096b8e4-kube-api-access-t8hj2\") on node \"crc\" DevicePath \"\"" Feb 16 22:37:39 crc kubenswrapper[4792]: I0216 22:37:39.962341 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.229668 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0806cdeb-2594-482d-93f1-144b9096b8e4" (UID: "0806cdeb-2594-482d-93f1-144b9096b8e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.250064 4792 generic.go:334] "Generic (PLEG): container finished" podID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerID="0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56" exitCode=0 Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.250109 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerDied","Data":"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56"} Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.250139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4t5g" event={"ID":"0806cdeb-2594-482d-93f1-144b9096b8e4","Type":"ContainerDied","Data":"a19f34dd94aaafebb1324fbca29d75b92ae19a717c49db9fc92afd6c04b190f6"} Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.250156 4792 scope.go:117] "RemoveContainer" containerID="0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.250186 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4t5g" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.270476 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0806cdeb-2594-482d-93f1-144b9096b8e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.279890 4792 scope.go:117] "RemoveContainer" containerID="8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.284375 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.305049 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4t5g"] Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.307294 4792 scope.go:117] "RemoveContainer" containerID="9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.410839 4792 scope.go:117] "RemoveContainer" containerID="0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56" Feb 16 22:37:40 crc kubenswrapper[4792]: E0216 22:37:40.411488 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56\": container with ID starting with 0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56 not found: ID does not exist" containerID="0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.411530 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56"} err="failed to get container status \"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56\": rpc error: code = NotFound desc = could not find container \"0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56\": container with ID starting with 0b366827f7fdd96b54ce62fdc170d52cdc388d1e9e66423534e6c25279872a56 not found: ID does not exist" Feb 16 
22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.411557 4792 scope.go:117] "RemoveContainer" containerID="8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3" Feb 16 22:37:40 crc kubenswrapper[4792]: E0216 22:37:40.412003 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3\": container with ID starting with 8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3 not found: ID does not exist" containerID="8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.412031 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3"} err="failed to get container status \"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3\": rpc error: code = NotFound desc = could not find container \"8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3\": container with ID starting with 8c9446e3b47d434305374243f666ede397440804b269589dfb5e7bb3abc461d3 not found: ID does not exist" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.412048 4792 scope.go:117] "RemoveContainer" containerID="9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add" Feb 16 22:37:40 crc kubenswrapper[4792]: E0216 22:37:40.412391 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add\": container with ID starting with 9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add not found: ID does not exist" containerID="9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add" Feb 16 22:37:40 crc kubenswrapper[4792]: I0216 22:37:40.412439 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add"} err="failed to get container status \"9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add\": rpc error: code = NotFound desc = could not find container \"9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add\": container with ID starting with 9fd5dc2bf52c0fea093de7c74f074c93b7b98eeb74f856be73a387fd530f3add not found: ID does not exist" Feb 16 22:37:41 crc kubenswrapper[4792]: E0216 22:37:41.045039 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:37:42 crc kubenswrapper[4792]: I0216 22:37:42.047828 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" path="/var/lib/kubelet/pods/0806cdeb-2594-482d-93f1-144b9096b8e4/volumes" Feb 16 22:37:48 crc kubenswrapper[4792]: E0216 22:37:48.038046 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:53 crc 
kubenswrapper[4792]: E0216 22:37:53.028378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.806489 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"] Feb 16 22:37:54 crc kubenswrapper[4792]: E0216 22:37:54.807761 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="extract-content" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.807785 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="extract-content" Feb 16 22:37:54 crc kubenswrapper[4792]: E0216 22:37:54.807849 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="extract-utilities" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.807864 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="extract-utilities" Feb 16 22:37:54 crc kubenswrapper[4792]: E0216 22:37:54.807903 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="registry-server" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.807918 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="registry-server" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.808325 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0806cdeb-2594-482d-93f1-144b9096b8e4" containerName="registry-server" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.811805 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.833931 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"] Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.862217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.862525 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkj8s\" (UniqueName: \"kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.862592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.965613 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkj8s\" (UniqueName: \"kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.965693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.965827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.966500 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.967222 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:54 crc kubenswrapper[4792]: I0216 22:37:54.985242 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fkj8s\" (UniqueName: \"kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s\") pod \"redhat-marketplace-k5cvj\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") " pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:55 crc kubenswrapper[4792]: I0216 22:37:55.169335 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:37:55 crc kubenswrapper[4792]: I0216 22:37:55.732480 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"] Feb 16 22:37:56 crc kubenswrapper[4792]: I0216 22:37:56.483610 4792 generic.go:334] "Generic (PLEG): container finished" podID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerID="d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be" exitCode=0 Feb 16 22:37:56 crc kubenswrapper[4792]: I0216 22:37:56.483879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerDied","Data":"d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be"} Feb 16 22:37:56 crc kubenswrapper[4792]: I0216 22:37:56.483924 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerStarted","Data":"a0d1a6c26f4d90eebb52c64341f34d502e4a49d9ad5386ec682a8bf2918c57af"} Feb 16 22:37:57 crc kubenswrapper[4792]: I0216 22:37:57.497637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerStarted","Data":"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e"} Feb 16 22:37:58 crc kubenswrapper[4792]: I0216 22:37:58.508717 4792 generic.go:334] "Generic (PLEG): container finished" podID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerID="26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e" exitCode=0 Feb 16 22:37:58 crc kubenswrapper[4792]: I0216 22:37:58.508761 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerDied","Data":"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e"} Feb 16 22:37:59 crc kubenswrapper[4792]: E0216 22:37:59.028176 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:37:59 crc kubenswrapper[4792]: I0216 22:37:59.520853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerStarted","Data":"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df"} Feb 16 22:37:59 crc kubenswrapper[4792]: I0216 22:37:59.541398 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5cvj" podStartSLOduration=3.134707198 podStartE2EDuration="5.541379202s" podCreationTimestamp="2026-02-16 22:37:54 +0000 UTC" firstStartedPulling="2026-02-16 22:37:56.485743382 +0000 UTC m=+3609.139022273" 
Feb 16 22:38:01 crc kubenswrapper[4792]: I0216 22:38:01.532655 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:38:01 crc kubenswrapper[4792]: I0216 22:38:01.532980 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:38:01 crc kubenswrapper[4792]: I0216 22:38:01.533016 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4"
Feb 16 22:38:01 crc kubenswrapper[4792]: I0216 22:38:01.533938 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 22:38:01 crc kubenswrapper[4792]: I0216 22:38:01.533992 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" gracePeriod=600
Feb 16 22:38:01 crc kubenswrapper[4792]: E0216 22:38:01.661254 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:38:02 crc kubenswrapper[4792]: I0216 22:38:02.557741 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" exitCode=0
Feb 16 22:38:02 crc kubenswrapper[4792]: I0216 22:38:02.557975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"}
Feb 16 22:38:02 crc kubenswrapper[4792]: I0216 22:38:02.558238 4792 scope.go:117] "RemoveContainer" containerID="e26fd174f26573b69cc9e60a909a98d227aca1b022ab5ac5d85230e5f6cbc62d"
Feb 16 22:38:02 crc kubenswrapper[4792]: I0216 22:38:02.559011 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:38:02 crc kubenswrapper[4792]: E0216 22:38:02.559373 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
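The liveness failure above is a plain HTTP GET that could not connect: nothing was listening on 127.0.0.1:8798 when kubelet's prober dialed it. A rough equivalent of the probe, runnable from the node to reproduce the same "connection refused" by hand (a sketch, not kubelet's prober):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the failing liveness probe dials; run on the node itself.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// Matches the logged failure: dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```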
Feb 16 22:38:05 crc kubenswrapper[4792]: E0216 22:38:05.028985 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:38:05 crc kubenswrapper[4792]: I0216 22:38:05.170114 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5cvj"
Feb 16 22:38:05 crc kubenswrapper[4792]: I0216 22:38:05.170170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5cvj"
Feb 16 22:38:05 crc kubenswrapper[4792]: I0216 22:38:05.237372 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5cvj"
Feb 16 22:38:05 crc kubenswrapper[4792]: I0216 22:38:05.659292 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5cvj"
Feb 16 22:38:05 crc kubenswrapper[4792]: I0216 22:38:05.743665 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"]
Feb 16 22:38:07 crc kubenswrapper[4792]: I0216 22:38:07.612969 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5cvj" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="registry-server" containerID="cri-o://0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df" gracePeriod=2
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.199957 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5cvj"
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.250300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities\") pod \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") "
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.250407 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content\") pod \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") "
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.250472 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkj8s\" (UniqueName: \"kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s\") pod \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\" (UID: \"8efc3e15-da51-472c-afd7-e664ddcfbf4c\") "
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.252712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities" (OuterVolumeSpecName: "utilities") pod "8efc3e15-da51-472c-afd7-e664ddcfbf4c" (UID: "8efc3e15-da51-472c-afd7-e664ddcfbf4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.258321 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s" (OuterVolumeSpecName: "kube-api-access-fkj8s") pod "8efc3e15-da51-472c-afd7-e664ddcfbf4c" (UID: "8efc3e15-da51-472c-afd7-e664ddcfbf4c"). InnerVolumeSpecName "kube-api-access-fkj8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.282451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8efc3e15-da51-472c-afd7-e664ddcfbf4c" (UID: "8efc3e15-da51-472c-afd7-e664ddcfbf4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.353128 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkj8s\" (UniqueName: \"kubernetes.io/projected/8efc3e15-da51-472c-afd7-e664ddcfbf4c-kube-api-access-fkj8s\") on node \"crc\" DevicePath \"\"" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.353162 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.353172 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8efc3e15-da51-472c-afd7-e664ddcfbf4c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.625895 4792 generic.go:334] "Generic (PLEG): container finished" podID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerID="0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df" exitCode=0 Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.625944 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5cvj" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.625963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerDied","Data":"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df"} Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.626863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5cvj" event={"ID":"8efc3e15-da51-472c-afd7-e664ddcfbf4c","Type":"ContainerDied","Data":"a0d1a6c26f4d90eebb52c64341f34d502e4a49d9ad5386ec682a8bf2918c57af"} Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.626903 4792 scope.go:117] "RemoveContainer" containerID="0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.651513 4792 scope.go:117] "RemoveContainer" containerID="26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.677451 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"] Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.697686 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5cvj"] Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.700329 4792 scope.go:117] "RemoveContainer" containerID="d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.755759 4792 scope.go:117] "RemoveContainer" containerID="0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df" Feb 16 22:38:08 crc kubenswrapper[4792]: E0216 22:38:08.756143 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df\": container with ID starting with 0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df not found: ID does not exist" containerID="0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.756179 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df"} err="failed to get container status \"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df\": rpc error: code = NotFound desc = could not find container \"0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df\": container with ID starting with 0b570af5fc773400fc5bb50e29885c74f6765eb3bfb732fc1021ba33af4ee2df not found: ID does not exist" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.756202 4792 scope.go:117] "RemoveContainer" containerID="26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e" Feb 16 22:38:08 crc kubenswrapper[4792]: E0216 22:38:08.756630 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e\": container with ID starting with 26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e not found: ID does not exist" containerID="26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.756662 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e"} err="failed to get container status \"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e\": rpc error: code = NotFound desc = could not find container \"26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e\": container with ID starting with 26fbf8a2711d7608da67b2015b82561cba07f3a531504247a8ea68c4cedce68e not found: ID does not exist" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.756681 4792 scope.go:117] "RemoveContainer" containerID="d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be" Feb 16 22:38:08 crc kubenswrapper[4792]: E0216 22:38:08.756997 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be\": container with ID starting with d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be not found: ID does not exist" containerID="d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be" Feb 16 22:38:08 crc kubenswrapper[4792]: I0216 22:38:08.757028 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be"} err="failed to get container status \"d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be\": rpc error: code = NotFound desc = could not find container \"d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be\": container with ID starting with d05cb93809f9350e039474c3037102cbf08732abb2b9c194a25b5533422134be not found: ID does not exist" Feb 16 22:38:10 crc kubenswrapper[4792]: E0216 22:38:10.048500 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:38:10 crc kubenswrapper[4792]: I0216 22:38:10.094297 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
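The three "DeleteContainer returned error" pairs above are the benign tail of pod removal: kubelet re-issues RemoveContainer for IDs CRI-O has already deleted, and the runtime answers with gRPC NotFound. A sketch of classifying that error the way a CRI caller can (alreadyRemoved is a hypothetical helper, not kubelet code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a CRI call failed only because the container
// is already gone, which is exactly the "rpc error: code = NotFound" seen in
// the entries above.
func alreadyRemoved(err error) bool {
	st, ok := status.FromError(err)
	return ok && st.Code() == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(alreadyRemoved(err)) // true: safe to treat deletion as complete
}
```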
podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" path="/var/lib/kubelet/pods/8efc3e15-da51-472c-afd7-e664ddcfbf4c/volumes" Feb 16 22:38:16 crc kubenswrapper[4792]: I0216 22:38:16.026802 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:38:16 crc kubenswrapper[4792]: E0216 22:38:16.027866 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:38:17 crc kubenswrapper[4792]: E0216 22:38:17.028427 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:38:22 crc kubenswrapper[4792]: E0216 22:38:22.029343 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:38:27 crc kubenswrapper[4792]: I0216 22:38:27.027105 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:38:27 crc kubenswrapper[4792]: E0216 22:38:27.028053 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:38:32 crc kubenswrapper[4792]: E0216 22:38:32.030426 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:38:36 crc kubenswrapper[4792]: E0216 22:38:36.031013 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:38:38 crc kubenswrapper[4792]: I0216 22:38:38.033893 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:38:38 crc kubenswrapper[4792]: E0216 22:38:38.034734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:38:46 crc kubenswrapper[4792]: E0216 22:38:46.030105 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:38:51 crc kubenswrapper[4792]: E0216 22:38:51.028850 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:38:52 crc kubenswrapper[4792]: I0216 22:38:52.027842 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:38:52 crc kubenswrapper[4792]: E0216 22:38:52.028878 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:39:00 crc kubenswrapper[4792]: E0216 22:39:00.029619 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:39:04 crc kubenswrapper[4792]: E0216 22:39:04.029518 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:39:06 crc kubenswrapper[4792]: I0216 22:39:06.029215 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:39:06 crc kubenswrapper[4792]: E0216 22:39:06.030190 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:39:15 crc kubenswrapper[4792]: E0216 22:39:15.031159 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
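From here the log settles into a steady pattern: two pods in ImagePullBackOff (heat-db-sync-jndsb, ceilometer-0) and one container in CrashLoopBackOff (machine-config-daemon), each retried on kubelet's back-off schedule. The same state can be surfaced from the API side; a minimal client-go sketch, assuming a reachable kubeconfig (an illustration, not the cluster's own tooling):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// Waiting reasons mirror the err= strings in the pod_workers entries.
			if w := st.State.Waiting; w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "CrashLoopBackOff") {
				fmt.Printf("%s/%s container=%s reason=%s\n", p.Namespace, p.Name, st.Name, w.Reason)
			}
		}
	}
}
```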
podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:39:18 crc kubenswrapper[4792]: I0216 22:39:18.041995 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:39:18 crc kubenswrapper[4792]: E0216 22:39:18.043530 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:39:18 crc kubenswrapper[4792]: I0216 22:39:18.044765 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:39:18 crc kubenswrapper[4792]: E0216 22:39:18.181522 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:39:18 crc kubenswrapper[4792]: E0216 22:39:18.181630 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:39:18 crc kubenswrapper[4792]: E0216 22:39:18.181846 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:39:18 crc kubenswrapper[4792]: E0216 22:39:18.183813 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
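Here the back-off finally resolves to its root cause: the pull fails outright because the current-tested tag no longer exists on quay.rdoproject.org (the "revive via time machine" wording is that registry's tag-expiry feature). The standard registry v2 API can confirm this out-of-band; a sketch with net/http, noting that some registries answer 401 and require a bearer token before returning the 404 (an illustration, not project tooling):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Manifest endpoint for the tag kubelet could not pull. A deleted or
	// expired tag yields 404 Not Found from the v2 API.
	url := "https://quay.rdoproject.org/v2/podified-master-centos10/openstack-heat-engine/manifests/current-tested"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// 404 would match "Tag current-tested was deleted or has expired".
	fmt.Println(resp.Status)
}
```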
Feb 16 22:39:28 crc kubenswrapper[4792]: E0216 22:39:28.145911 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:39:28 crc kubenswrapper[4792]: E0216 22:39:28.146477 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:39:28 crc kubenswrapper[4792]: E0216 22:39:28.146655 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
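The &Container{...} dump above is the Go struct kubelet failed to start. Its liveness probe section decodes to the following client-go literal, with values copied from the dump (a reconstruction for readability, not source from the operator):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Liveness probe of the ceilometer-central-agent container, transcribed from
// the struct dump above into the client-go type it was serialized from.
var centralAgentLiveness = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			Command: []string{"/usr/bin/python3", "/var/lib/openstack/bin/centralhealth.py"},
		},
	},
	InitialDelaySeconds: 300, // no liveness verdict for the first five minutes
	TimeoutSeconds:      5,
	PeriodSeconds:       5,
	SuccessThreshold:    1,
	FailureThreshold:    3,
}

func main() {
	fmt.Printf("%+v\n", centralAgentLiveness)
}
```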
Feb 16 22:39:28 crc kubenswrapper[4792]: E0216 22:39:28.148625 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:39:31 crc kubenswrapper[4792]: I0216 22:39:31.026848 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:39:31 crc kubenswrapper[4792]: E0216 22:39:31.027656 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:39:32 crc kubenswrapper[4792]: E0216 22:39:32.030320 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:39:42 crc kubenswrapper[4792]: E0216 22:39:42.029500 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:39:44 crc kubenswrapper[4792]: I0216 22:39:44.053565 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:39:44 crc kubenswrapper[4792]: E0216 22:39:44.054322 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:39:47 crc kubenswrapper[4792]: E0216 22:39:47.028067 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:39:54 crc kubenswrapper[4792]: E0216 22:39:54.031692 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:39:57 crc kubenswrapper[4792]: I0216 22:39:57.026050 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:39:57 crc kubenswrapper[4792]: E0216 22:39:57.026873 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:40:00 crc kubenswrapper[4792]: E0216 22:40:00.028942 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:40:08 crc kubenswrapper[4792]: E0216 22:40:08.040843 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:40:10 crc kubenswrapper[4792]: I0216 22:40:10.026156 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:40:10 crc kubenswrapper[4792]: E0216 22:40:10.027024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:40:12 crc kubenswrapper[4792]: E0216 22:40:12.062301 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:40:21 crc kubenswrapper[4792]: I0216 22:40:21.026055 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:40:21 crc kubenswrapper[4792]: E0216 22:40:21.026837 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:40:23 crc kubenswrapper[4792]: E0216 22:40:23.028523 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:40:24 crc kubenswrapper[4792]: E0216 22:40:24.029836 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:40:34 crc kubenswrapper[4792]: I0216 22:40:34.027244 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:40:34 crc kubenswrapper[4792]: E0216 22:40:34.028318 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:40:35 crc kubenswrapper[4792]: E0216 22:40:35.030052 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:40:37 crc kubenswrapper[4792]: E0216 22:40:37.027833 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:40:46 crc kubenswrapper[4792]: I0216 22:40:46.026457 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:40:46 crc kubenswrapper[4792]: E0216 22:40:46.027225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:40:50 crc kubenswrapper[4792]: E0216 22:40:50.029404 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:40:50 crc kubenswrapper[4792]: E0216 22:40:50.029415 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:40:57 crc kubenswrapper[4792]: I0216 22:40:57.026529 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:40:57 crc kubenswrapper[4792]: E0216 22:40:57.027352 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:41:02 crc kubenswrapper[4792]: E0216 22:41:02.028638 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:41:03 crc kubenswrapper[4792]: E0216 22:41:03.030673 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:41:11 crc kubenswrapper[4792]: I0216 22:41:11.027058 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:41:11 crc kubenswrapper[4792]: E0216 22:41:11.027626 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:41:17 crc kubenswrapper[4792]: E0216 22:41:17.029931 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:41:17 crc kubenswrapper[4792]: E0216 22:41:17.029936 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:41:26 crc kubenswrapper[4792]: I0216 22:41:26.026961 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:41:26 crc kubenswrapper[4792]: E0216 22:41:26.028300 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:41:28 crc kubenswrapper[4792]: I0216 22:41:28.288340 4792 generic.go:334] "Generic (PLEG): container finished" podID="1fd88c0f-2daa-4b0f-b372-141a953ab8b0" containerID="88fd22d2b8cd79c122f778d72c198142ec03ff956a4c779fccf1ea0aaf2d5267" exitCode=2 Feb 16 22:41:28 crc kubenswrapper[4792]: I0216 22:41:28.288435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" event={"ID":"1fd88c0f-2daa-4b0f-b372-141a953ab8b0","Type":"ContainerDied","Data":"88fd22d2b8cd79c122f778d72c198142ec03ff956a4c779fccf1ea0aaf2d5267"} Feb 16 22:41:29 crc kubenswrapper[4792]: E0216 22:41:29.030577 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:41:29 crc kubenswrapper[4792]: E0216 22:41:29.030665 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:41:29 crc kubenswrapper[4792]: I0216 22:41:29.882275 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.031826 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory\") pod \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.031936 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcct\" (UniqueName: \"kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct\") pod \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.032152 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam\") pod \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\" (UID: \"1fd88c0f-2daa-4b0f-b372-141a953ab8b0\") " Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.039923 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct" (OuterVolumeSpecName: "kube-api-access-lwcct") pod "1fd88c0f-2daa-4b0f-b372-141a953ab8b0" (UID: "1fd88c0f-2daa-4b0f-b372-141a953ab8b0"). InnerVolumeSpecName "kube-api-access-lwcct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.064797 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory" (OuterVolumeSpecName: "inventory") pod "1fd88c0f-2daa-4b0f-b372-141a953ab8b0" (UID: "1fd88c0f-2daa-4b0f-b372-141a953ab8b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.069346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1fd88c0f-2daa-4b0f-b372-141a953ab8b0" (UID: "1fd88c0f-2daa-4b0f-b372-141a953ab8b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.135171 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.135408 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwcct\" (UniqueName: \"kubernetes.io/projected/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-kube-api-access-lwcct\") on node \"crc\" DevicePath \"\"" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.135498 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd88c0f-2daa-4b0f-b372-141a953ab8b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.321135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" event={"ID":"1fd88c0f-2daa-4b0f-b372-141a953ab8b0","Type":"ContainerDied","Data":"efc18fb7e51cd35057ffa8115e58fdf3dd4657b29d50c219973f611c40ed7825"} Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.321181 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efc18fb7e51cd35057ffa8115e58fdf3dd4657b29d50c219973f611c40ed7825" Feb 16 22:41:30 crc kubenswrapper[4792]: I0216 22:41:30.321203 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k8djj" Feb 16 22:41:37 crc kubenswrapper[4792]: I0216 22:41:37.026898 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:41:37 crc kubenswrapper[4792]: E0216 22:41:37.028092 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:41:41 crc kubenswrapper[4792]: E0216 22:41:41.029753 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:41:43 crc kubenswrapper[4792]: E0216 22:41:43.028701 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:41:51 crc kubenswrapper[4792]: I0216 22:41:51.026878 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:41:51 crc kubenswrapper[4792]: E0216 22:41:51.027916 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:41:54 crc kubenswrapper[4792]: E0216 22:41:54.031214 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:41:55 crc kubenswrapper[4792]: E0216 22:41:55.028580 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:03 crc kubenswrapper[4792]: I0216 22:42:03.026749 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:42:03 crc kubenswrapper[4792]: E0216 22:42:03.028122 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:42:05 crc kubenswrapper[4792]: E0216 22:42:05.029894 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:42:07 crc kubenswrapper[4792]: E0216 22:42:07.031159 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:15 crc kubenswrapper[4792]: I0216 22:42:15.028131 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:42:15 crc kubenswrapper[4792]: E0216 22:42:15.029723 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:42:17 crc kubenswrapper[4792]: E0216 22:42:17.030869 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:42:20 crc kubenswrapper[4792]: E0216 22:42:20.030392 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:28 crc kubenswrapper[4792]: I0216 22:42:28.045133 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:42:28 crc kubenswrapper[4792]: E0216 22:42:28.046106 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:42:30 crc kubenswrapper[4792]: E0216 22:42:30.032583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:42:32 crc kubenswrapper[4792]: E0216 22:42:32.029075 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:42 crc kubenswrapper[4792]: I0216 22:42:42.026911 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:42:42 crc kubenswrapper[4792]: E0216 22:42:42.027895 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:42:43 crc kubenswrapper[4792]: E0216 22:42:43.028882 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:45 crc kubenswrapper[4792]: E0216 22:42:45.027961 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.026833 4792 scope.go:117] "RemoveContainer" 
containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.028173 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.029502 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.052088 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.052688 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="extract-utilities" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.052734 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="extract-utilities" Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.052775 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="registry-server" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.052785 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="registry-server" Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.052796 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd88c0f-2daa-4b0f-b372-141a953ab8b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.052808 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd88c0f-2daa-4b0f-b372-141a953ab8b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:42:54 crc kubenswrapper[4792]: E0216 22:42:54.052838 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="extract-content" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.052849 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="extract-content" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.053162 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd88c0f-2daa-4b0f-b372-141a953ab8b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.053237 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efc3e15-da51-472c-afd7-e664ddcfbf4c" containerName="registry-server" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.055391 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.072922 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.201424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.201995 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.202067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wc87\" (UniqueName: \"kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.305554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.306315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wc87\" (UniqueName: \"kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.306475 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.306465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.307283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.337562 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6wc87\" (UniqueName: \"kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87\") pod \"certified-operators-sxd9c\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.381682 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:42:54 crc kubenswrapper[4792]: I0216 22:42:54.955555 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:42:55 crc kubenswrapper[4792]: I0216 22:42:55.446937 4792 generic.go:334] "Generic (PLEG): container finished" podID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerID="910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd" exitCode=0 Feb 16 22:42:55 crc kubenswrapper[4792]: I0216 22:42:55.447033 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerDied","Data":"910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd"} Feb 16 22:42:55 crc kubenswrapper[4792]: I0216 22:42:55.447258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerStarted","Data":"283cab836a64926b5abf3393d32214d27bad3da7e264967f473594e89a575e97"} Feb 16 22:42:57 crc kubenswrapper[4792]: I0216 22:42:57.476696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerStarted","Data":"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded"} Feb 16 22:42:58 crc kubenswrapper[4792]: I0216 22:42:58.492486 4792 generic.go:334] "Generic (PLEG): container finished" podID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerID="401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded" exitCode=0 Feb 16 22:42:58 crc kubenswrapper[4792]: I0216 22:42:58.492593 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerDied","Data":"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded"} Feb 16 22:42:59 crc kubenswrapper[4792]: I0216 22:42:59.511404 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerStarted","Data":"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5"} Feb 16 22:42:59 crc kubenswrapper[4792]: I0216 22:42:59.540429 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sxd9c" podStartSLOduration=2.079066588 podStartE2EDuration="5.540404906s" podCreationTimestamp="2026-02-16 22:42:54 +0000 UTC" firstStartedPulling="2026-02-16 22:42:55.451167477 +0000 UTC m=+3908.104446408" lastFinishedPulling="2026-02-16 22:42:58.912505805 +0000 UTC m=+3911.565784726" observedRunningTime="2026-02-16 22:42:59.539841981 +0000 UTC m=+3912.193120892" watchObservedRunningTime="2026-02-16 22:42:59.540404906 +0000 UTC m=+3912.193683807" Feb 16 22:43:00 crc kubenswrapper[4792]: E0216 22:43:00.027809 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:43:04 crc kubenswrapper[4792]: I0216 22:43:04.381747 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:04 crc kubenswrapper[4792]: I0216 22:43:04.383276 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:05 crc kubenswrapper[4792]: I0216 22:43:05.451797 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sxd9c" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="registry-server" probeResult="failure" output=< Feb 16 22:43:05 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:43:05 crc kubenswrapper[4792]: > Feb 16 22:43:06 crc kubenswrapper[4792]: I0216 22:43:06.025966 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734" Feb 16 22:43:06 crc kubenswrapper[4792]: I0216 22:43:06.596082 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132"} Feb 16 22:43:08 crc kubenswrapper[4792]: E0216 22:43:08.039587 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:43:14 crc kubenswrapper[4792]: I0216 22:43:14.438647 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:14 crc kubenswrapper[4792]: I0216 22:43:14.505977 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:14 crc kubenswrapper[4792]: I0216 22:43:14.692178 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:43:15 crc kubenswrapper[4792]: E0216 22:43:15.028540 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:43:15 crc kubenswrapper[4792]: I0216 22:43:15.696183 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sxd9c" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="registry-server" containerID="cri-o://fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5" gracePeriod=2 Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.227519 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.309078 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wc87\" (UniqueName: \"kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87\") pod \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.309717 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content\") pod \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.309808 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities\") pod \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\" (UID: \"41d4f63a-43b2-4dd5-8837-e5c53c95312a\") " Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.310482 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities" (OuterVolumeSpecName: "utilities") pod "41d4f63a-43b2-4dd5-8837-e5c53c95312a" (UID: "41d4f63a-43b2-4dd5-8837-e5c53c95312a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.311248 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.326442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87" (OuterVolumeSpecName: "kube-api-access-6wc87") pod "41d4f63a-43b2-4dd5-8837-e5c53c95312a" (UID: "41d4f63a-43b2-4dd5-8837-e5c53c95312a"). InnerVolumeSpecName "kube-api-access-6wc87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.382478 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41d4f63a-43b2-4dd5-8837-e5c53c95312a" (UID: "41d4f63a-43b2-4dd5-8837-e5c53c95312a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.413570 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d4f63a-43b2-4dd5-8837-e5c53c95312a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.413632 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wc87\" (UniqueName: \"kubernetes.io/projected/41d4f63a-43b2-4dd5-8837-e5c53c95312a-kube-api-access-6wc87\") on node \"crc\" DevicePath \"\"" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.706708 4792 generic.go:334] "Generic (PLEG): container finished" podID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerID="fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5" exitCode=0 Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.706749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerDied","Data":"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5"} Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.706776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxd9c" event={"ID":"41d4f63a-43b2-4dd5-8837-e5c53c95312a","Type":"ContainerDied","Data":"283cab836a64926b5abf3393d32214d27bad3da7e264967f473594e89a575e97"} Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.706792 4792 scope.go:117] "RemoveContainer" containerID="fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.706796 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sxd9c" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.727093 4792 scope.go:117] "RemoveContainer" containerID="401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.747070 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.757138 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sxd9c"] Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.768566 4792 scope.go:117] "RemoveContainer" containerID="910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.815421 4792 scope.go:117] "RemoveContainer" containerID="fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5" Feb 16 22:43:16 crc kubenswrapper[4792]: E0216 22:43:16.816960 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5\": container with ID starting with fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5 not found: ID does not exist" containerID="fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.817002 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5"} err="failed to get container status \"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5\": rpc error: code = NotFound desc = could not find container \"fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5\": container with ID starting with fcf09c13b225b7e662373854976b89faa9a9fb0685c4d3a11c4530a9670a1fe5 not found: ID does not exist" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.817029 4792 scope.go:117] "RemoveContainer" containerID="401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded" Feb 16 22:43:16 crc kubenswrapper[4792]: E0216 22:43:16.817486 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded\": container with ID starting with 401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded not found: ID does not exist" containerID="401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.817546 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded"} err="failed to get container status \"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded\": rpc error: code = NotFound desc = could not find container \"401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded\": container with ID starting with 401fb68246714ceb172de4380e600eaadebfef0be846b364ffc44b3d2eb2aded not found: ID does not exist" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.817573 4792 scope.go:117] "RemoveContainer" containerID="910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd" Feb 16 22:43:16 crc kubenswrapper[4792]: E0216 22:43:16.817875 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd\": container with ID starting with 910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd not found: ID does not exist" containerID="910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd" Feb 16 22:43:16 crc kubenswrapper[4792]: I0216 22:43:16.817897 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd"} err="failed to get container status \"910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd\": rpc error: code = NotFound desc = could not find container \"910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd\": container with ID starting with 910deddb1975f17e93efa7bd129efb8f110b499b3495633fd15934c07a15bccd not found: ID does not exist" Feb 16 22:43:18 crc kubenswrapper[4792]: I0216 22:43:18.044767 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" path="/var/lib/kubelet/pods/41d4f63a-43b2-4dd5-8837-e5c53c95312a/volumes" Feb 16 22:43:20 crc kubenswrapper[4792]: E0216 22:43:20.029489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:43:30 crc kubenswrapper[4792]: E0216 22:43:30.029684 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:43:32 crc kubenswrapper[4792]: E0216 22:43:32.031265 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:43:43 crc kubenswrapper[4792]: E0216 22:43:43.028152 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:43:45 crc kubenswrapper[4792]: E0216 22:43:45.029258 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:43:54 crc kubenswrapper[4792]: E0216 22:43:54.029512 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:43:58 crc kubenswrapper[4792]: E0216 22:43:58.039317 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:44:05 crc kubenswrapper[4792]: E0216 22:44:05.029222 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.044958 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p"] Feb 16 22:44:07 crc kubenswrapper[4792]: E0216 22:44:07.046136 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="extract-utilities" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.046158 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="extract-utilities" Feb 16 22:44:07 crc kubenswrapper[4792]: E0216 22:44:07.046178 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="registry-server" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.046188 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="registry-server" Feb 16 22:44:07 crc kubenswrapper[4792]: E0216 22:44:07.046239 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="extract-content" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.046249 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="extract-content" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.046637 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d4f63a-43b2-4dd5-8837-e5c53c95312a" containerName="registry-server" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.048028 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.050733 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.051065 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.056427 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.058504 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.063764 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p"] Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.133930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.134200 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qzl\" (UniqueName: \"kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.134272 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.236768 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qzl\" (UniqueName: \"kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.236896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.237104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.251855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.253559 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.269899 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qzl\" (UniqueName: \"kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:07 crc kubenswrapper[4792]: I0216 22:44:07.397956 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:44:08 crc kubenswrapper[4792]: I0216 22:44:08.077685 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p"] Feb 16 22:44:08 crc kubenswrapper[4792]: I0216 22:44:08.406523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" event={"ID":"e500e093-7b90-49a9-ae41-03f88648baa6","Type":"ContainerStarted","Data":"61baf9f0461e8ccd6b63fce4cb65ac73b0e56195ae0b67e1615d84e05c1aa557"} Feb 16 22:44:09 crc kubenswrapper[4792]: I0216 22:44:09.422313 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" event={"ID":"e500e093-7b90-49a9-ae41-03f88648baa6","Type":"ContainerStarted","Data":"8ad05a4c6a7e43f7bde744a82ca8fe8f5cf943fc1eca0e3b277e5648148f77a2"} Feb 16 22:44:12 crc kubenswrapper[4792]: E0216 22:44:12.029414 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:44:17 crc kubenswrapper[4792]: E0216 22:44:17.030710 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:44:27 crc kubenswrapper[4792]: E0216 22:44:27.030763 
4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:44:32 crc kubenswrapper[4792]: I0216 22:44:32.029095 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:44:32 crc kubenswrapper[4792]: E0216 22:44:32.149723 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:44:32 crc kubenswrapper[4792]: E0216 22:44:32.149833 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:44:32 crc kubenswrapper[4792]: E0216 22:44:32.150087 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:44:32 crc kubenswrapper[4792]: E0216 22:44:32.151665 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:44:42 crc kubenswrapper[4792]: E0216 22:44:42.119083 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:44:42 crc kubenswrapper[4792]: E0216 22:44:42.119641 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:44:42 crc kubenswrapper[4792]: E0216 22:44:42.119774 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:44:42 crc kubenswrapper[4792]: E0216 22:44:42.120978 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:44:43 crc kubenswrapper[4792]: E0216 22:44:43.028014 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:44:43 crc kubenswrapper[4792]: I0216 22:44:43.049216 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" podStartSLOduration=35.614820382 podStartE2EDuration="36.04919629s" podCreationTimestamp="2026-02-16 22:44:07 +0000 UTC" firstStartedPulling="2026-02-16 22:44:08.066173004 +0000 UTC m=+3980.719451895" lastFinishedPulling="2026-02-16 22:44:08.500548882 +0000 UTC m=+3981.153827803" observedRunningTime="2026-02-16 22:44:09.447582239 +0000 UTC m=+3982.100861170" watchObservedRunningTime="2026-02-16 22:44:43.04919629 +0000 UTC m=+4015.702475191" Feb 16 22:44:56 crc kubenswrapper[4792]: E0216 22:44:56.029191 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:44:57 crc kubenswrapper[4792]: E0216 22:44:57.029371 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.232133 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt"] Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.235705 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.238047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.238299 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54d7c\" (UniqueName: \"kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.238517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.239102 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.239505 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.245325 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt"] Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.340392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.340480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54d7c\" (UniqueName: \"kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.340533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.341472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume\") pod 
\"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.354228 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.358138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54d7c\" (UniqueName: \"kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c\") pod \"collect-profiles-29521365-gt6qt\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:00 crc kubenswrapper[4792]: I0216 22:45:00.568266 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" Feb 16 22:45:01 crc kubenswrapper[4792]: I0216 22:45:01.123965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt"] Feb 16 22:45:02 crc kubenswrapper[4792]: I0216 22:45:02.086786 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8fa525f-8751-4ea3-8ea5-b88b067dddfb" containerID="a60eb2f67f2d660fdd81faca2bfb7b4c31c426a5b16c1756f6522a92a59909e3" exitCode=0 Feb 16 22:45:02 crc kubenswrapper[4792]: I0216 22:45:02.087183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" event={"ID":"d8fa525f-8751-4ea3-8ea5-b88b067dddfb","Type":"ContainerDied","Data":"a60eb2f67f2d660fdd81faca2bfb7b4c31c426a5b16c1756f6522a92a59909e3"} Feb 16 22:45:02 crc kubenswrapper[4792]: I0216 22:45:02.087254 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" event={"ID":"d8fa525f-8751-4ea3-8ea5-b88b067dddfb","Type":"ContainerStarted","Data":"c59e841dca953150201b7c937631be21bdc3ab8935bdda7a7e2a449f30feb237"} Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.516527 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt"
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.627315 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume\") pod \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") "
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.627460 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume\") pod \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") "
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.627712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54d7c\" (UniqueName: \"kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c\") pod \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\" (UID: \"d8fa525f-8751-4ea3-8ea5-b88b067dddfb\") "
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.629314 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8fa525f-8751-4ea3-8ea5-b88b067dddfb" (UID: "d8fa525f-8751-4ea3-8ea5-b88b067dddfb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.641743 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8fa525f-8751-4ea3-8ea5-b88b067dddfb" (UID: "d8fa525f-8751-4ea3-8ea5-b88b067dddfb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.646277 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c" (OuterVolumeSpecName: "kube-api-access-54d7c") pod "d8fa525f-8751-4ea3-8ea5-b88b067dddfb" (UID: "d8fa525f-8751-4ea3-8ea5-b88b067dddfb"). InnerVolumeSpecName "kube-api-access-54d7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.730205 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.730238 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 22:45:03 crc kubenswrapper[4792]: I0216 22:45:03.730248 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54d7c\" (UniqueName: \"kubernetes.io/projected/d8fa525f-8751-4ea3-8ea5-b88b067dddfb-kube-api-access-54d7c\") on node \"crc\" DevicePath \"\""
Feb 16 22:45:04 crc kubenswrapper[4792]: I0216 22:45:04.117534 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt" event={"ID":"d8fa525f-8751-4ea3-8ea5-b88b067dddfb","Type":"ContainerDied","Data":"c59e841dca953150201b7c937631be21bdc3ab8935bdda7a7e2a449f30feb237"}
Feb 16 22:45:04 crc kubenswrapper[4792]: I0216 22:45:04.117630 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c59e841dca953150201b7c937631be21bdc3ab8935bdda7a7e2a449f30feb237"
Feb 16 22:45:04 crc kubenswrapper[4792]: I0216 22:45:04.117663 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-gt6qt"
Feb 16 22:45:04 crc kubenswrapper[4792]: I0216 22:45:04.631822 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4"]
Feb 16 22:45:04 crc kubenswrapper[4792]: I0216 22:45:04.644998 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-8zfz4"]
Feb 16 22:45:06 crc kubenswrapper[4792]: I0216 22:45:06.048422 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea9f2da-4123-4e53-a53f-f760412371e5" path="/var/lib/kubelet/pods/dea9f2da-4123-4e53-a53f-f760412371e5/volumes"
Feb 16 22:45:08 crc kubenswrapper[4792]: E0216 22:45:08.052059 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:45:10 crc kubenswrapper[4792]: E0216 22:45:10.028703 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:45:23 crc kubenswrapper[4792]: E0216 22:45:23.030654 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:45:24 crc kubenswrapper[4792]: E0216 22:45:24.030468 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:45:31 crc kubenswrapper[4792]: I0216 22:45:31.532519 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:45:31 crc kubenswrapper[4792]: I0216 22:45:31.533314 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:45:38 crc kubenswrapper[4792]: E0216 22:45:38.040178 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:45:38 crc kubenswrapper[4792]: E0216 22:45:38.040196 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:45:50 crc kubenswrapper[4792]: E0216 22:45:50.029975 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:45:53 crc kubenswrapper[4792]: E0216 22:45:53.029173 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:46:01 crc kubenswrapper[4792]: I0216 22:46:01.532407 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:46:01 crc kubenswrapper[4792]: I0216 22:46:01.533014 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:46:01 crc kubenswrapper[4792]: I0216 22:46:01.702476 4792 scope.go:117] "RemoveContainer" containerID="78bd8eccfde02c14fc4ff2962cf71485078c080333df8a80d2d4dbde974c22cc"
Feb 16 22:46:05 crc kubenswrapper[4792]: E0216 22:46:05.029192 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:46:06 crc kubenswrapper[4792]: E0216 22:46:06.028353 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:46:17 crc kubenswrapper[4792]: E0216 22:46:17.029466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:46:20 crc kubenswrapper[4792]: E0216 22:46:20.029767 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:46:28 crc kubenswrapper[4792]: E0216 22:46:28.045373 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:46:31 crc kubenswrapper[4792]: I0216 22:46:31.532985 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:46:31 crc kubenswrapper[4792]: I0216 22:46:31.533625 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:46:31 crc kubenswrapper[4792]: I0216 22:46:31.533673 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4"
Feb 16 22:46:31 crc kubenswrapper[4792]: I0216 22:46:31.534774 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 22:46:31 crc kubenswrapper[4792]: I0216 22:46:31.534843 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132" gracePeriod=600
Feb 16 22:46:32 crc kubenswrapper[4792]: I0216 22:46:32.363688 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132" exitCode=0
Feb 16 22:46:32 crc kubenswrapper[4792]: I0216 22:46:32.363781 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132"}
Feb 16 22:46:32 crc kubenswrapper[4792]: I0216 22:46:32.364312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"}
Feb 16 22:46:32 crc kubenswrapper[4792]: I0216 22:46:32.364386 4792 scope.go:117] "RemoveContainer" containerID="19d2b4ea1340d7d1a0f5000bd3e29a26b27eed51cd50c3ebc1865fa4bf9bb734"
Feb 16 22:46:33 crc kubenswrapper[4792]: E0216 22:46:33.029769 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:46:43 crc kubenswrapper[4792]: E0216 22:46:43.029858 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:46:45 crc kubenswrapper[4792]: E0216 22:46:45.028503 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:46:55 crc kubenswrapper[4792]: E0216 22:46:55.028873 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:47:00 crc kubenswrapper[4792]: E0216 22:47:00.029735 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:47:09 crc kubenswrapper[4792]: E0216 22:47:09.029788 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:47:15 crc kubenswrapper[4792]: E0216 22:47:15.029527 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:47:21 crc kubenswrapper[4792]: E0216 22:47:21.151129 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:47:29 crc kubenswrapper[4792]: E0216 22:47:29.032371 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:47:33 crc kubenswrapper[4792]: E0216 22:47:33.029686 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:47:42 crc kubenswrapper[4792]: E0216 22:47:42.029986 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:47:46 crc kubenswrapper[4792]: E0216 22:47:46.030130 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.007730 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:47:48 crc kubenswrapper[4792]: E0216 22:47:48.008899 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fa525f-8751-4ea3-8ea5-b88b067dddfb" containerName="collect-profiles"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.008920 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fa525f-8751-4ea3-8ea5-b88b067dddfb" containerName="collect-profiles"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.009321 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fa525f-8751-4ea3-8ea5-b88b067dddfb" containerName="collect-profiles"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.012161 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.071900 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.124690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prmgd\" (UniqueName: \"kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.124929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.125038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.226999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prmgd\" (UniqueName: \"kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.227508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.227624 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.228095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.231549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.256506 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prmgd\" (UniqueName: \"kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd\") pod \"community-operators-7rzqg\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") " pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.371081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:48 crc kubenswrapper[4792]: I0216 22:47:48.916953 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:47:49 crc kubenswrapper[4792]: I0216 22:47:49.522496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerStarted","Data":"026cb45195d35da45b6acf62ef752615b3959aeac947559c7b4fa9dd17d6d035"}
Feb 16 22:47:50 crc kubenswrapper[4792]: I0216 22:47:50.539185 4792 generic.go:334] "Generic (PLEG): container finished" podID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerID="bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d" exitCode=0
Feb 16 22:47:50 crc kubenswrapper[4792]: I0216 22:47:50.539265 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerDied","Data":"bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d"}
Feb 16 22:47:52 crc kubenswrapper[4792]: I0216 22:47:52.567731 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerStarted","Data":"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"}
Feb 16 22:47:54 crc kubenswrapper[4792]: I0216 22:47:54.603845 4792 generic.go:334] "Generic (PLEG): container finished" podID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerID="60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971" exitCode=0
Feb 16 22:47:54 crc kubenswrapper[4792]: I0216 22:47:54.603900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerDied","Data":"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"}
Feb 16 22:47:55 crc kubenswrapper[4792]: I0216 22:47:55.623683 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerStarted","Data":"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"}
Feb 16 22:47:55 crc kubenswrapper[4792]: I0216 22:47:55.673456 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7rzqg" podStartSLOduration=4.145221361 podStartE2EDuration="8.673426922s" podCreationTimestamp="2026-02-16 22:47:47 +0000 UTC" firstStartedPulling="2026-02-16 22:47:50.543452395 +0000 UTC m=+4203.196731286" lastFinishedPulling="2026-02-16 22:47:55.071657926 +0000 UTC m=+4207.724936847" observedRunningTime="2026-02-16 22:47:55.646361872 +0000 UTC m=+4208.299640773" watchObservedRunningTime="2026-02-16 22:47:55.673426922 +0000 UTC m=+4208.326705833"
Feb 16 22:47:56 crc kubenswrapper[4792]: E0216 22:47:56.029223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:47:58 crc kubenswrapper[4792]: I0216 22:47:58.372150 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:58 crc kubenswrapper[4792]: I0216 22:47:58.372376 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:47:59 crc kubenswrapper[4792]: I0216 22:47:59.414360 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7rzqg" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="registry-server" probeResult="failure" output=<
Feb 16 22:47:59 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 22:47:59 crc kubenswrapper[4792]: >
Feb 16 22:48:01 crc kubenswrapper[4792]: E0216 22:48:01.031193 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:48:08 crc kubenswrapper[4792]: I0216 22:48:08.459870 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:48:08 crc kubenswrapper[4792]: I0216 22:48:08.556805 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:48:08 crc kubenswrapper[4792]: I0216 22:48:08.715229 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:48:09 crc kubenswrapper[4792]: I0216 22:48:09.808552 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7rzqg" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="registry-server" containerID="cri-o://1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf" gracePeriod=2
Feb 16 22:48:10 crc kubenswrapper[4792]: E0216 22:48:10.028934 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.394339 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.518241 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content\") pod \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") "
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.518428 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities\") pod \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") "
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.518454 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prmgd\" (UniqueName: \"kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd\") pod \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\" (UID: \"ad75611a-9fc7-4239-8dcc-ae7b91ff7781\") "
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.519495 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities" (OuterVolumeSpecName: "utilities") pod "ad75611a-9fc7-4239-8dcc-ae7b91ff7781" (UID: "ad75611a-9fc7-4239-8dcc-ae7b91ff7781"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.525865 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd" (OuterVolumeSpecName: "kube-api-access-prmgd") pod "ad75611a-9fc7-4239-8dcc-ae7b91ff7781" (UID: "ad75611a-9fc7-4239-8dcc-ae7b91ff7781"). InnerVolumeSpecName "kube-api-access-prmgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.576215 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad75611a-9fc7-4239-8dcc-ae7b91ff7781" (UID: "ad75611a-9fc7-4239-8dcc-ae7b91ff7781"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.621353 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.621612 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.621685 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prmgd\" (UniqueName: \"kubernetes.io/projected/ad75611a-9fc7-4239-8dcc-ae7b91ff7781-kube-api-access-prmgd\") on node \"crc\" DevicePath \"\""
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.823023 4792 generic.go:334] "Generic (PLEG): container finished" podID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerID="1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf" exitCode=0
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.823072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerDied","Data":"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"}
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.823112 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7rzqg" event={"ID":"ad75611a-9fc7-4239-8dcc-ae7b91ff7781","Type":"ContainerDied","Data":"026cb45195d35da45b6acf62ef752615b3959aeac947559c7b4fa9dd17d6d035"}
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.823134 4792 scope.go:117] "RemoveContainer" containerID="1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.823191 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7rzqg"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.864851 4792 scope.go:117] "RemoveContainer" containerID="60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.889647 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.904234 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7rzqg"]
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.907237 4792 scope.go:117] "RemoveContainer" containerID="bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.992679 4792 scope.go:117] "RemoveContainer" containerID="1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"
Feb 16 22:48:10 crc kubenswrapper[4792]: E0216 22:48:10.993248 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf\": container with ID starting with 1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf not found: ID does not exist" containerID="1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.993317 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf"} err="failed to get container status \"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf\": rpc error: code = NotFound desc = could not find container \"1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf\": container with ID starting with 1f7af6bc01cded457a295686757b1b447c8109b90fa5f07fd450e8907bdca4bf not found: ID does not exist"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.993363 4792 scope.go:117] "RemoveContainer" containerID="60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"
Feb 16 22:48:10 crc kubenswrapper[4792]: E0216 22:48:10.993998 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971\": container with ID starting with 60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971 not found: ID does not exist" containerID="60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.994056 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971"} err="failed to get container status \"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971\": rpc error: code = NotFound desc = could not find container \"60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971\": container with ID starting with 60e302d1b0e0697ea86a2632152914c329f832c6a902498d6bd5daf968f3a971 not found: ID does not exist"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.994095 4792 scope.go:117] "RemoveContainer" containerID="bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d"
Feb 16 22:48:10 crc kubenswrapper[4792]: E0216 22:48:10.994695 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d\": container with ID starting with bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d not found: ID does not exist" containerID="bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d"
Feb 16 22:48:10 crc kubenswrapper[4792]: I0216 22:48:10.994789 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d"} err="failed to get container status \"bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d\": rpc error: code = NotFound desc = could not find container \"bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d\": container with ID starting with bfaba8e39b5d68aab3fa9d5db0bac68b4de5a4a8774d244e69803832d2a29f0d not found: ID does not exist"
Feb 16 22:48:12 crc kubenswrapper[4792]: E0216 22:48:12.028959 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:48:12 crc kubenswrapper[4792]: I0216 22:48:12.042028 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" path="/var/lib/kubelet/pods/ad75611a-9fc7-4239-8dcc-ae7b91ff7781/volumes"
Feb 16 22:48:21 crc kubenswrapper[4792]: I0216 22:48:21.764559 4792 trace.go:236] Trace[439266831]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (16-Feb-2026 22:48:20.199) (total time: 1564ms):
Feb 16 22:48:21 crc kubenswrapper[4792]: Trace[439266831]: [1.564556549s] [1.564556549s] END
Feb 16 22:48:23 crc kubenswrapper[4792]: E0216 22:48:23.029047 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:48:26 crc kubenswrapper[4792]: E0216 22:48:26.032812 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:48:31 crc kubenswrapper[4792]: I0216 22:48:31.532565 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:48:31 crc kubenswrapper[4792]: I0216 22:48:31.533180 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:48:35 crc kubenswrapper[4792]: E0216 22:48:35.029585 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:48:39 crc kubenswrapper[4792]: E0216 22:48:39.028248 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:48:47 crc kubenswrapper[4792]: E0216 22:48:47.042964 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.259235 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:48:47 crc kubenswrapper[4792]: E0216 22:48:47.259908 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="extract-utilities"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.259926 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="extract-utilities"
Feb 16 22:48:47 crc kubenswrapper[4792]: E0216 22:48:47.259992 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="extract-content"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.260004 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="extract-content"
Feb 16 22:48:47 crc kubenswrapper[4792]: E0216 22:48:47.260030 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="registry-server"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.260040 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="registry-server"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.260373 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad75611a-9fc7-4239-8dcc-ae7b91ff7781" containerName="registry-server"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.262893 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.269391 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.370024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.370444 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkf9w\" (UniqueName: \"kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.370801 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.473214 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkf9w\" (UniqueName: \"kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.473283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.473431 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.473901 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.474457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.492362 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkf9w\" (UniqueName: \"kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w\") pod \"redhat-marketplace-gn5f5\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") " pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:47 crc kubenswrapper[4792]: I0216 22:48:47.643433 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:48 crc kubenswrapper[4792]: I0216 22:48:48.148919 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:48:48 crc kubenswrapper[4792]: I0216 22:48:48.616076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerStarted","Data":"99ab5ef8731adb53867d3fc868b2dfa0b0c963d320102d1cb9be11abc212bfa4"}
Feb 16 22:48:49 crc kubenswrapper[4792]: I0216 22:48:49.630664 4792 generic.go:334] "Generic (PLEG): container finished" podID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerID="254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87" exitCode=0
Feb 16 22:48:49 crc kubenswrapper[4792]: I0216 22:48:49.630847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerDied","Data":"254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87"}
Feb 16 22:48:50 crc kubenswrapper[4792]: I0216 22:48:50.646889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerStarted","Data":"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"}
Feb 16 22:48:51 crc kubenswrapper[4792]: I0216 22:48:51.677512 4792 generic.go:334] "Generic (PLEG): container finished" podID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerID="1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660" exitCode=0
Feb 16 22:48:51 crc kubenswrapper[4792]: I0216 22:48:51.677740 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerDied","Data":"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"}
Feb 16 22:48:52 crc kubenswrapper[4792]: I0216 22:48:52.692156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerStarted","Data":"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"}
Feb 16 22:48:52 crc kubenswrapper[4792]: I0216 22:48:52.720258 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gn5f5" podStartSLOduration=3.269925155 podStartE2EDuration="5.720225347s" podCreationTimestamp="2026-02-16 22:48:47 +0000 UTC" firstStartedPulling="2026-02-16 22:48:49.63676569 +0000 UTC m=+4262.290044601" lastFinishedPulling="2026-02-16 22:48:52.087065902 +0000 UTC m=+4264.740344793" observedRunningTime="2026-02-16 22:48:52.716251309 +0000 UTC m=+4265.369530210" watchObservedRunningTime="2026-02-16 22:48:52.720225347 +0000 UTC m=+4265.373504268"
Feb 16 22:48:53 crc kubenswrapper[4792]: E0216 22:48:53.027324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:48:57 crc kubenswrapper[4792]: I0216 22:48:57.644333 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:57 crc kubenswrapper[4792]: I0216 22:48:57.645126 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:57 crc kubenswrapper[4792]: I0216 22:48:57.705423 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:57 crc kubenswrapper[4792]: I0216 22:48:57.900170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:48:57 crc kubenswrapper[4792]: I0216 22:48:57.993077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:48:59 crc kubenswrapper[4792]: I0216 22:48:59.776902 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gn5f5" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="registry-server" containerID="cri-o://fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6" gracePeriod=2
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.345798 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.520256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content\") pod \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") "
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.520356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities\") pod \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") "
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.520628 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkf9w\" (UniqueName: \"kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w\") pod \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\" (UID: \"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1\") "
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.521403 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities" (OuterVolumeSpecName: "utilities") pod "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" (UID: "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.527331 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w" (OuterVolumeSpecName: "kube-api-access-vkf9w") pod "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" (UID: "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1"). InnerVolumeSpecName "kube-api-access-vkf9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.545143 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" (UID: "5ea112e9-e8d9-475c-ae22-1f1a3e7929b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.623265 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkf9w\" (UniqueName: \"kubernetes.io/projected/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-kube-api-access-vkf9w\") on node \"crc\" DevicePath \"\""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.623300 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.623309 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.790094 4792 generic.go:334] "Generic (PLEG): container finished" podID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerID="fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6" exitCode=0
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.790142 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerDied","Data":"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"}
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.790160 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn5f5"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.790177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn5f5" event={"ID":"5ea112e9-e8d9-475c-ae22-1f1a3e7929b1","Type":"ContainerDied","Data":"99ab5ef8731adb53867d3fc868b2dfa0b0c963d320102d1cb9be11abc212bfa4"}
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.790195 4792 scope.go:117] "RemoveContainer" containerID="fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.833264 4792 scope.go:117] "RemoveContainer" containerID="1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.840483 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.857147 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn5f5"]
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.857943 4792 scope.go:117] "RemoveContainer" containerID="254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.933756 4792 scope.go:117] "RemoveContainer" containerID="fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"
Feb 16 22:49:00 crc kubenswrapper[4792]: E0216 22:49:00.934254 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6\": container with ID starting with fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6 not found: ID does not exist" containerID="fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.934294 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6"} err="failed to get container status \"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6\": rpc error: code = NotFound desc = could not find container \"fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6\": container with ID starting with fc2d532d775a9e452a4aeed408f5626cb6ac9930bccfd9433d2ef3acd56091f6 not found: ID does not exist"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.934325 4792 scope.go:117] "RemoveContainer" containerID="1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"
Feb 16 22:49:00 crc kubenswrapper[4792]: E0216 22:49:00.934956 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660\": container with ID starting with 1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660 not found: ID does not exist" containerID="1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.934987 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660"} err="failed to get container status \"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660\": rpc error: code = NotFound desc = could not find container \"1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660\": container with ID starting with 1dee7d1c7458806c174b043859760f9302967a74ece1b09fef146983f94f1660 not found: ID does not exist"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.935005 4792 scope.go:117] "RemoveContainer" containerID="254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87"
Feb 16 22:49:00 crc kubenswrapper[4792]: E0216 22:49:00.935357 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87\": container with ID starting with 254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87 not found: ID does not exist" containerID="254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87"
Feb 16 22:49:00 crc kubenswrapper[4792]: I0216 22:49:00.935386 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87"} err="failed to get container status \"254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87\": rpc error: code = NotFound desc = could not find container \"254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87\": container with ID starting with 254a6b95c53594b9691073a77aa66734b2bf32933be94f5203c96d9dafecac87 not found: ID does not exist"
Feb 16 22:49:01 crc kubenswrapper[4792]: E0216 22:49:01.028053 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:49:01 crc kubenswrapper[4792]: I0216 22:49:01.532113 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:49:01 crc kubenswrapper[4792]: I0216 22:49:01.532200 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:49:02 crc kubenswrapper[4792]: I0216 22:49:02.041860 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" path="/var/lib/kubelet/pods/5ea112e9-e8d9-475c-ae22-1f1a3e7929b1/volumes"
Feb 16 22:49:05 crc kubenswrapper[4792]: E0216 22:49:05.029037 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:49:16 crc kubenswrapper[4792]: E0216 22:49:16.029327 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:49:17 crc kubenswrapper[4792]: E0216 22:49:17.028390 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:49:29 crc kubenswrapper[4792]: E0216 22:49:29.029612 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:49:30 crc kubenswrapper[4792]: E0216 22:49:30.029252 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:49:31 crc kubenswrapper[4792]: I0216 22:49:31.531972 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:49:31 crc kubenswrapper[4792]: I0216 22:49:31.532366 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:49:31 crc kubenswrapper[4792]: I0216 22:49:31.532412 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4"
Feb 16 22:49:31 crc kubenswrapper[4792]: I0216 22:49:31.533374 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 22:49:31 crc kubenswrapper[4792]: I0216 22:49:31.533420 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" gracePeriod=600
Feb 16 22:49:31 crc kubenswrapper[4792]: E0216 22:49:31.663040 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:49:32 crc kubenswrapper[4792]: I0216 22:49:32.215904 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" exitCode=0
Feb 16 22:49:32 crc kubenswrapper[4792]: I0216 22:49:32.215963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"}
Feb 16 22:49:32 crc kubenswrapper[4792]: I0216 22:49:32.216017 4792 scope.go:117] "RemoveContainer" containerID="92754b101b9b849ee7f8e791ffcbd306c751d625847390e5be5b1e87c7e7f132"
Feb 16 22:49:32 crc kubenswrapper[4792]: I0216 22:49:32.217392 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:49:32 crc kubenswrapper[4792]: E0216 22:49:32.217748 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:49:40 crc kubenswrapper[4792]: I0216 22:49:40.028967 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:49:40 crc kubenswrapper[4792]: E0216 22:49:40.154016 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:49:40 crc kubenswrapper[4792]: E0216 22:49:40.154089 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired.
Feb 16 22:49:40 crc kubenswrapper[4792]: E0216 22:49:40.154089 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 22:49:40 crc kubenswrapper[4792]: E0216 22:49:40.154250 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:49:40 crc kubenswrapper[4792]: E0216 22:49:40.155531 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:49:43 crc kubenswrapper[4792]: E0216 22:49:43.149449 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:49:43 crc kubenswrapper[4792]: E0216 22:49:43.149836 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 22:49:43 crc kubenswrapper[4792]: E0216 22:49:43.149998 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 22:49:43 crc kubenswrapper[4792]: E0216 22:49:43.151387 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:49:47 crc kubenswrapper[4792]: I0216 22:49:47.026541 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:49:47 crc kubenswrapper[4792]: E0216 22:49:47.027730 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:49:51 crc kubenswrapper[4792]: E0216 22:49:51.031403 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:49:57 crc kubenswrapper[4792]: E0216 22:49:57.030391 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:49:58 crc kubenswrapper[4792]: I0216 22:49:58.033385 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:49:58 crc kubenswrapper[4792]: E0216 22:49:58.034222 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:50:06 crc kubenswrapper[4792]: E0216 22:50:06.031881 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:50:10 crc kubenswrapper[4792]: I0216 22:50:10.027212 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:50:10 crc kubenswrapper[4792]: E0216 22:50:10.030474 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:50:11 crc kubenswrapper[4792]: E0216 22:50:11.029925 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:50:18 crc kubenswrapper[4792]: E0216 22:50:18.044785 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:50:21 crc kubenswrapper[4792]: I0216 22:50:21.027012 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:50:21 crc kubenswrapper[4792]: E0216 22:50:21.028062 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:50:26 crc kubenswrapper[4792]: E0216 22:50:26.031263 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:50:33 crc kubenswrapper[4792]: I0216 22:50:33.102338 4792 generic.go:334] "Generic (PLEG): container finished" podID="e500e093-7b90-49a9-ae41-03f88648baa6" containerID="8ad05a4c6a7e43f7bde744a82ca8fe8f5cf943fc1eca0e3b277e5648148f77a2" exitCode=2 Feb 16 22:50:33 crc kubenswrapper[4792]: I0216 22:50:33.102427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" event={"ID":"e500e093-7b90-49a9-ae41-03f88648baa6","Type":"ContainerDied","Data":"8ad05a4c6a7e43f7bde744a82ca8fe8f5cf943fc1eca0e3b277e5648148f77a2"} Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.026994 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:50:34 crc kubenswrapper[4792]: E0216 22:50:34.027719 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.679822 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.795882 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam\") pod \"e500e093-7b90-49a9-ae41-03f88648baa6\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.796165 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory\") pod \"e500e093-7b90-49a9-ae41-03f88648baa6\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.796224 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2qzl\" (UniqueName: \"kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl\") pod \"e500e093-7b90-49a9-ae41-03f88648baa6\" (UID: \"e500e093-7b90-49a9-ae41-03f88648baa6\") " Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.803373 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl" (OuterVolumeSpecName: "kube-api-access-w2qzl") pod "e500e093-7b90-49a9-ae41-03f88648baa6" (UID: "e500e093-7b90-49a9-ae41-03f88648baa6"). InnerVolumeSpecName "kube-api-access-w2qzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.841342 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory" (OuterVolumeSpecName: "inventory") pod "e500e093-7b90-49a9-ae41-03f88648baa6" (UID: "e500e093-7b90-49a9-ae41-03f88648baa6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.862216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e500e093-7b90-49a9-ae41-03f88648baa6" (UID: "e500e093-7b90-49a9-ae41-03f88648baa6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.898716 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.898758 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e500e093-7b90-49a9-ae41-03f88648baa6-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:34 crc kubenswrapper[4792]: I0216 22:50:34.898772 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2qzl\" (UniqueName: \"kubernetes.io/projected/e500e093-7b90-49a9-ae41-03f88648baa6-kube-api-access-w2qzl\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:35 crc kubenswrapper[4792]: I0216 22:50:35.136647 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" event={"ID":"e500e093-7b90-49a9-ae41-03f88648baa6","Type":"ContainerDied","Data":"61baf9f0461e8ccd6b63fce4cb65ac73b0e56195ae0b67e1615d84e05c1aa557"} Feb 16 22:50:35 crc kubenswrapper[4792]: I0216 22:50:35.136969 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61baf9f0461e8ccd6b63fce4cb65ac73b0e56195ae0b67e1615d84e05c1aa557" Feb 16 22:50:35 crc kubenswrapper[4792]: I0216 22:50:35.137163 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p" Feb 16 22:50:38 crc kubenswrapper[4792]: E0216 22:50:38.039652 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:50:43 crc kubenswrapper[4792]: E0216 22:50:43.028796 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:50:49 crc kubenswrapper[4792]: I0216 22:50:49.027164 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:50:49 crc kubenswrapper[4792]: E0216 22:50:49.028495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:50:51 crc kubenswrapper[4792]: E0216 22:50:51.029100 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:50:55 crc kubenswrapper[4792]: E0216 22:50:55.029530 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:51:01 crc kubenswrapper[4792]: I0216 22:51:01.029319 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:51:01 crc kubenswrapper[4792]: E0216 22:51:01.030394 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:51:05 crc kubenswrapper[4792]: E0216 22:51:05.029920 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:51:08 crc kubenswrapper[4792]: E0216 22:51:08.047076 4792 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:51:14 crc kubenswrapper[4792]: I0216 22:51:14.026866 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:51:14 crc kubenswrapper[4792]: E0216 22:51:14.027538 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:51:19 crc kubenswrapper[4792]: E0216 22:51:19.028767 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:51:22 crc kubenswrapper[4792]: E0216 22:51:22.028572 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:51:26 crc kubenswrapper[4792]: I0216 22:51:26.026951 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:51:26 crc kubenswrapper[4792]: E0216 22:51:26.027522 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:51:31 crc kubenswrapper[4792]: E0216 22:51:31.030010 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:51:36 crc kubenswrapper[4792]: E0216 22:51:36.028893 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:51:37 crc kubenswrapper[4792]: I0216 22:51:37.026766 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:51:37 crc kubenswrapper[4792]: E0216 22:51:37.027281 4792 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:51:45 crc kubenswrapper[4792]: E0216 22:51:45.029324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:51:49 crc kubenswrapper[4792]: I0216 22:51:49.027814 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:51:49 crc kubenswrapper[4792]: E0216 22:51:49.028367 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:51:50 crc kubenswrapper[4792]: E0216 22:51:50.029256 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:51:59 crc kubenswrapper[4792]: E0216 22:51:59.028230 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:52:03 crc kubenswrapper[4792]: I0216 22:52:03.029344 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:52:03 crc kubenswrapper[4792]: E0216 22:52:03.041970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:52:04 crc kubenswrapper[4792]: E0216 22:52:04.029034 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:52:11 crc kubenswrapper[4792]: E0216 22:52:11.027994 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.083880 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"] Feb 16 22:52:12 crc kubenswrapper[4792]: E0216 22:52:12.084869 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="extract-utilities" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.084893 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="extract-utilities" Feb 16 22:52:12 crc kubenswrapper[4792]: E0216 22:52:12.084924 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="registry-server" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.084935 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="registry-server" Feb 16 22:52:12 crc kubenswrapper[4792]: E0216 22:52:12.084968 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="extract-content" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.084979 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="extract-content" Feb 16 22:52:12 crc kubenswrapper[4792]: E0216 22:52:12.085021 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500e093-7b90-49a9-ae41-03f88648baa6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.085031 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500e093-7b90-49a9-ae41-03f88648baa6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.085349 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e500e093-7b90-49a9-ae41-03f88648baa6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.085380 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea112e9-e8d9-475c-ae22-1f1a3e7929b1" containerName="registry-server" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.087894 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.098242 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"] Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.111646 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q787f\" (UniqueName: \"kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.111703 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.111838 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-catalog-content\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.216362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-catalog-content\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.216731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q787f\" (UniqueName: \"kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.216825 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.217459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.218282 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-catalog-content\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.236697 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q787f\" (UniqueName: \"kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f\") pod \"redhat-operators-97bk2\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") " pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.428153 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:12 crc kubenswrapper[4792]: I0216 22:52:12.959292 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"] Feb 16 22:52:13 crc kubenswrapper[4792]: I0216 22:52:13.373981 4792 generic.go:334] "Generic (PLEG): container finished" podID="e765ef87-0cfd-4132-b145-335b30c102e4" containerID="7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431" exitCode=0 Feb 16 22:52:13 crc kubenswrapper[4792]: I0216 22:52:13.374074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerDied","Data":"7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431"} Feb 16 22:52:13 crc kubenswrapper[4792]: I0216 22:52:13.374332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerStarted","Data":"ceca8baac92e4ca6e3a691d180ab329872fd1503b43bca1bf1f4c8f0afc72eeb"} Feb 16 22:52:14 crc kubenswrapper[4792]: I0216 22:52:14.390663 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerStarted","Data":"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129"} Feb 16 22:52:15 crc kubenswrapper[4792]: E0216 22:52:15.028405 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:52:17 crc kubenswrapper[4792]: I0216 22:52:17.027382 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:52:17 crc kubenswrapper[4792]: E0216 22:52:17.029080 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:52:19 crc kubenswrapper[4792]: I0216 22:52:19.452080 4792 generic.go:334] "Generic (PLEG): container finished" podID="e765ef87-0cfd-4132-b145-335b30c102e4" containerID="fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129" exitCode=0 Feb 16 22:52:19 crc kubenswrapper[4792]: I0216 22:52:19.452152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerDied","Data":"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129"} Feb 16 22:52:21 crc kubenswrapper[4792]: I0216 22:52:21.488490 4792 
Feb 16 22:52:21 crc kubenswrapper[4792]: I0216 22:52:21.488490 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerStarted","Data":"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621"}
Feb 16 22:52:22 crc kubenswrapper[4792]: I0216 22:52:22.429043 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97bk2"
Feb 16 22:52:22 crc kubenswrapper[4792]: I0216 22:52:22.429126 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97bk2"
Feb 16 22:52:23 crc kubenswrapper[4792]: I0216 22:52:23.492069 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97bk2" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" probeResult="failure" output=<
Feb 16 22:52:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 22:52:23 crc kubenswrapper[4792]: >
Feb 16 22:52:25 crc kubenswrapper[4792]: E0216 22:52:25.028881 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:52:28 crc kubenswrapper[4792]: E0216 22:52:28.042957 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:52:30 crc kubenswrapper[4792]: I0216 22:52:30.027245 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:52:30 crc kubenswrapper[4792]: E0216 22:52:30.027582 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:52:33 crc kubenswrapper[4792]: I0216 22:52:33.474680 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97bk2" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" probeResult="failure" output=<
Feb 16 22:52:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 22:52:33 crc kubenswrapper[4792]: >
Feb 16 22:52:37 crc kubenswrapper[4792]: E0216 22:52:37.031777 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:52:41 crc kubenswrapper[4792]: E0216 22:52:41.029108 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:52:43 crc kubenswrapper[4792]: I0216 22:52:43.519259 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97bk2" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" probeResult="failure" output=<
Feb 16 22:52:43 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 22:52:43 crc kubenswrapper[4792]: >
Feb 16 22:52:45 crc kubenswrapper[4792]: I0216 22:52:45.027835 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8"
Feb 16 22:52:45 crc kubenswrapper[4792]: E0216 22:52:45.028562 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 22:52:48 crc kubenswrapper[4792]: E0216 22:52:48.042289 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 22:52:52 crc kubenswrapper[4792]: E0216 22:52:52.033815 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 22:52:52 crc kubenswrapper[4792]: I0216 22:52:52.519595 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97bk2"
Feb 16 22:52:52 crc kubenswrapper[4792]: I0216 22:52:52.574419 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97bk2" podStartSLOduration=34.004471509 podStartE2EDuration="40.574368093s" podCreationTimestamp="2026-02-16 22:52:12 +0000 UTC" firstStartedPulling="2026-02-16 22:52:13.377107879 +0000 UTC m=+4466.030386770" lastFinishedPulling="2026-02-16 22:52:19.947004463 +0000 UTC m=+4472.600283354" observedRunningTime="2026-02-16 22:52:21.511151365 +0000 UTC m=+4474.164430286" watchObservedRunningTime="2026-02-16 22:52:52.574368093 +0000 UTC m=+4505.227646994"
Feb 16 22:52:52 crc kubenswrapper[4792]: I0216 22:52:52.605256 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97bk2"
Feb 16 22:52:52 crc kubenswrapper[4792]: I0216 22:52:52.764346 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"]
Feb 16 22:52:53 crc kubenswrapper[4792]: I0216 22:52:53.891166 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97bk2" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" containerID="cri-o://749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621" gracePeriod=2
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.482139 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97bk2"
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.598325 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q787f\" (UniqueName: \"kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f\") pod \"e765ef87-0cfd-4132-b145-335b30c102e4\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") "
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.598661 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities\") pod \"e765ef87-0cfd-4132-b145-335b30c102e4\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") "
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.598741 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-catalog-content\") pod \"e765ef87-0cfd-4132-b145-335b30c102e4\" (UID: \"e765ef87-0cfd-4132-b145-335b30c102e4\") "
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.599615 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities" (OuterVolumeSpecName: "utilities") pod "e765ef87-0cfd-4132-b145-335b30c102e4" (UID: "e765ef87-0cfd-4132-b145-335b30c102e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.606395 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f" (OuterVolumeSpecName: "kube-api-access-q787f") pod "e765ef87-0cfd-4132-b145-335b30c102e4" (UID: "e765ef87-0cfd-4132-b145-335b30c102e4"). InnerVolumeSpecName "kube-api-access-q787f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.701903 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q787f\" (UniqueName: \"kubernetes.io/projected/e765ef87-0cfd-4132-b145-335b30c102e4-kube-api-access-q787f\") on node \"crc\" DevicePath \"\""
Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.701936 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-utilities\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.804003 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e765ef87-0cfd-4132-b145-335b30c102e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.905125 4792 generic.go:334] "Generic (PLEG): container finished" podID="e765ef87-0cfd-4132-b145-335b30c102e4" containerID="749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621" exitCode=0 Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.905185 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerDied","Data":"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621"} Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.905225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97bk2" event={"ID":"e765ef87-0cfd-4132-b145-335b30c102e4","Type":"ContainerDied","Data":"ceca8baac92e4ca6e3a691d180ab329872fd1503b43bca1bf1f4c8f0afc72eeb"} Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.905254 4792 scope.go:117] "RemoveContainer" containerID="749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.905189 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97bk2" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.929331 4792 scope.go:117] "RemoveContainer" containerID="fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.963385 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"] Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.966453 4792 scope.go:117] "RemoveContainer" containerID="7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431" Feb 16 22:52:54 crc kubenswrapper[4792]: I0216 22:52:54.984553 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97bk2"] Feb 16 22:52:55 crc kubenswrapper[4792]: I0216 22:52:55.017470 4792 scope.go:117] "RemoveContainer" containerID="749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621" Feb 16 22:52:55 crc kubenswrapper[4792]: E0216 22:52:55.017935 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621\": container with ID starting with 749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621 not found: ID does not exist" containerID="749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621" Feb 16 22:52:55 crc kubenswrapper[4792]: I0216 22:52:55.017976 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621"} err="failed to get container status \"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621\": rpc error: code = NotFound desc = could not find container \"749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621\": container with ID starting with 749140a89ea3ce30c5d32235000c0f65471dd0938e34a74851aa73658978e621 not found: ID does not exist" Feb 16 22:52:55 crc 
kubenswrapper[4792]: I0216 22:52:55.018004 4792 scope.go:117] "RemoveContainer" containerID="fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129" Feb 16 22:52:55 crc kubenswrapper[4792]: E0216 22:52:55.018302 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129\": container with ID starting with fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129 not found: ID does not exist" containerID="fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129" Feb 16 22:52:55 crc kubenswrapper[4792]: I0216 22:52:55.018326 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129"} err="failed to get container status \"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129\": rpc error: code = NotFound desc = could not find container \"fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129\": container with ID starting with fd0c41bc5cee7885e980883557df845cdd8f60c670117294de81bd460ffd5129 not found: ID does not exist" Feb 16 22:52:55 crc kubenswrapper[4792]: I0216 22:52:55.018343 4792 scope.go:117] "RemoveContainer" containerID="7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431" Feb 16 22:52:55 crc kubenswrapper[4792]: E0216 22:52:55.018576 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431\": container with ID starting with 7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431 not found: ID does not exist" containerID="7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431" Feb 16 22:52:55 crc kubenswrapper[4792]: I0216 22:52:55.018627 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431"} err="failed to get container status \"7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431\": rpc error: code = NotFound desc = could not find container \"7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431\": container with ID starting with 7b743fca7332b1db208173adf6d40c1f23912ed3f6972873470fdc3a76ba5431 not found: ID does not exist" Feb 16 22:52:56 crc kubenswrapper[4792]: I0216 22:52:56.045571 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" path="/var/lib/kubelet/pods/e765ef87-0cfd-4132-b145-335b30c102e4/volumes" Feb 16 22:53:00 crc kubenswrapper[4792]: I0216 22:53:00.028786 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:53:00 crc kubenswrapper[4792]: E0216 22:53:00.029618 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:53:02 crc kubenswrapper[4792]: E0216 22:53:02.028461 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:53:03 crc kubenswrapper[4792]: E0216 22:53:03.038781 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:53:11 crc kubenswrapper[4792]: I0216 22:53:11.728236 4792 trace.go:236] Trace[1871202026]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (16-Feb-2026 22:53:10.689) (total time: 1038ms): Feb 16 22:53:11 crc kubenswrapper[4792]: Trace[1871202026]: [1.038503296s] [1.038503296s] END Feb 16 22:53:12 crc kubenswrapper[4792]: I0216 22:53:12.027513 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:53:12 crc kubenswrapper[4792]: E0216 22:53:12.028103 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:53:14 crc kubenswrapper[4792]: E0216 22:53:14.031584 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:53:15 crc kubenswrapper[4792]: E0216 22:53:15.029984 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:53:25 crc kubenswrapper[4792]: I0216 22:53:25.027841 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:53:25 crc kubenswrapper[4792]: E0216 22:53:25.028869 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:53:26 crc kubenswrapper[4792]: E0216 22:53:26.028593 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:53:28 crc 
kubenswrapper[4792]: E0216 22:53:28.048815 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:53:39 crc kubenswrapper[4792]: I0216 22:53:39.027383 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:53:39 crc kubenswrapper[4792]: E0216 22:53:39.028082 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:53:40 crc kubenswrapper[4792]: E0216 22:53:40.030390 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:53:41 crc kubenswrapper[4792]: E0216 22:53:41.028425 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:53:53 crc kubenswrapper[4792]: E0216 22:53:53.028072 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:53:54 crc kubenswrapper[4792]: I0216 22:53:54.028011 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:53:54 crc kubenswrapper[4792]: E0216 22:53:54.029446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:53:55 crc kubenswrapper[4792]: E0216 22:53:55.029361 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:54:06 crc kubenswrapper[4792]: E0216 22:54:06.029829 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:54:07 crc kubenswrapper[4792]: I0216 22:54:07.027697 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:54:07 crc kubenswrapper[4792]: E0216 22:54:07.028018 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:54:10 crc kubenswrapper[4792]: E0216 22:54:10.030434 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:54:20 crc kubenswrapper[4792]: I0216 22:54:20.026396 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:54:20 crc kubenswrapper[4792]: E0216 22:54:20.027148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 22:54:21 crc kubenswrapper[4792]: E0216 22:54:21.303391 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:54:25 crc kubenswrapper[4792]: E0216 22:54:25.029187 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:54:34 crc kubenswrapper[4792]: I0216 22:54:34.027934 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:54:34 crc kubenswrapper[4792]: E0216 22:54:34.029473 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:54:35 crc kubenswrapper[4792]: I0216 22:54:35.162303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" 
event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de"} Feb 16 22:54:40 crc kubenswrapper[4792]: E0216 22:54:40.029588 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:54:45 crc kubenswrapper[4792]: I0216 22:54:45.223082 4792 trace.go:236] Trace[1998748228]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (16-Feb-2026 22:54:44.094) (total time: 1129ms): Feb 16 22:54:45 crc kubenswrapper[4792]: Trace[1998748228]: [1.129028169s] [1.129028169s] END Feb 16 22:54:46 crc kubenswrapper[4792]: I0216 22:54:46.032843 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:54:46 crc kubenswrapper[4792]: E0216 22:54:46.128477 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:54:46 crc kubenswrapper[4792]: E0216 22:54:46.128577 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:54:46 crc kubenswrapper[4792]: E0216 22:54:46.128803 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:54:46 crc kubenswrapper[4792]: E0216 22:54:46.130491 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:54:54 crc kubenswrapper[4792]: E0216 22:54:54.157957 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:54:54 crc kubenswrapper[4792]: E0216 22:54:54.158554 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 22:54:54 crc kubenswrapper[4792]: E0216 22:54:54.158787 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 22:54:54 crc kubenswrapper[4792]: E0216 22:54:54.159950 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:54:57 crc kubenswrapper[4792]: E0216 22:54:57.027942 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.759238 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:05 crc kubenswrapper[4792]: E0216 22:55:05.760358 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="extract-content" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.760374 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="extract-content" Feb 16 22:55:05 crc kubenswrapper[4792]: E0216 22:55:05.760405 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.760411 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" Feb 16 22:55:05 crc kubenswrapper[4792]: E0216 22:55:05.760422 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="extract-utilities" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.760428 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="extract-utilities" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.760671 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e765ef87-0cfd-4132-b145-335b30c102e4" containerName="registry-server" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.762611 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.771456 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.877815 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.877911 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmdjw\" (UniqueName: \"kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.877972 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.982009 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.982111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmdjw\" (UniqueName: \"kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.982177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.982589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:05 crc kubenswrapper[4792]: I0216 22:55:05.982634 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:06 crc kubenswrapper[4792]: I0216 22:55:06.008798 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmdjw\" (UniqueName: \"kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw\") pod \"certified-operators-q98v6\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:06 crc kubenswrapper[4792]: I0216 22:55:06.097130 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:06 crc kubenswrapper[4792]: I0216 22:55:06.642351 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:07 crc kubenswrapper[4792]: I0216 22:55:07.538915 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ced41b2-4b50-4681-ac18-15e23e756991" containerID="3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4" exitCode=0 Feb 16 22:55:07 crc kubenswrapper[4792]: I0216 22:55:07.538988 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerDied","Data":"3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4"} Feb 16 22:55:07 crc kubenswrapper[4792]: I0216 22:55:07.539326 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerStarted","Data":"7a6b688fd4d44fe92c2ef3b07ccfc0a10fe8443059d176a187c76cdefdaf327c"} Feb 16 22:55:09 crc kubenswrapper[4792]: E0216 22:55:09.029945 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:55:09 crc kubenswrapper[4792]: E0216 22:55:09.030656 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:55:09 crc kubenswrapper[4792]: I0216 22:55:09.565509 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerStarted","Data":"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee"} Feb 16 22:55:13 crc kubenswrapper[4792]: I0216 22:55:13.630589 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ced41b2-4b50-4681-ac18-15e23e756991" containerID="c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee" exitCode=0 Feb 16 22:55:13 crc kubenswrapper[4792]: I0216 22:55:13.630732 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" 
event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerDied","Data":"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee"} Feb 16 22:55:14 crc kubenswrapper[4792]: I0216 22:55:14.651696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerStarted","Data":"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4"} Feb 16 22:55:14 crc kubenswrapper[4792]: I0216 22:55:14.682761 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q98v6" podStartSLOduration=3.127013219 podStartE2EDuration="9.68274s" podCreationTimestamp="2026-02-16 22:55:05 +0000 UTC" firstStartedPulling="2026-02-16 22:55:07.544066457 +0000 UTC m=+4640.197345348" lastFinishedPulling="2026-02-16 22:55:14.099793228 +0000 UTC m=+4646.753072129" observedRunningTime="2026-02-16 22:55:14.680258042 +0000 UTC m=+4647.333536973" watchObservedRunningTime="2026-02-16 22:55:14.68274 +0000 UTC m=+4647.336018901" Feb 16 22:55:16 crc kubenswrapper[4792]: I0216 22:55:16.097703 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:16 crc kubenswrapper[4792]: I0216 22:55:16.098721 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:17 crc kubenswrapper[4792]: I0216 22:55:17.165064 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q98v6" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="registry-server" probeResult="failure" output=< Feb 16 22:55:17 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 22:55:17 crc kubenswrapper[4792]: > Feb 16 22:55:21 crc kubenswrapper[4792]: E0216 22:55:21.030465 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:55:24 crc kubenswrapper[4792]: E0216 22:55:24.029741 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:55:26 crc kubenswrapper[4792]: I0216 22:55:26.396160 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:26 crc kubenswrapper[4792]: I0216 22:55:26.461745 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.038643 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.039382 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q98v6" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="registry-server" 
containerID="cri-o://66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4" gracePeriod=2 Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.569047 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.658606 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmdjw\" (UniqueName: \"kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw\") pod \"6ced41b2-4b50-4681-ac18-15e23e756991\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.658790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content\") pod \"6ced41b2-4b50-4681-ac18-15e23e756991\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.658904 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities\") pod \"6ced41b2-4b50-4681-ac18-15e23e756991\" (UID: \"6ced41b2-4b50-4681-ac18-15e23e756991\") " Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.662992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities" (OuterVolumeSpecName: "utilities") pod "6ced41b2-4b50-4681-ac18-15e23e756991" (UID: "6ced41b2-4b50-4681-ac18-15e23e756991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.667235 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw" (OuterVolumeSpecName: "kube-api-access-dmdjw") pod "6ced41b2-4b50-4681-ac18-15e23e756991" (UID: "6ced41b2-4b50-4681-ac18-15e23e756991"). InnerVolumeSpecName "kube-api-access-dmdjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.710534 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ced41b2-4b50-4681-ac18-15e23e756991" (UID: "6ced41b2-4b50-4681-ac18-15e23e756991"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.761541 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.761581 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced41b2-4b50-4681-ac18-15e23e756991-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.761607 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmdjw\" (UniqueName: \"kubernetes.io/projected/6ced41b2-4b50-4681-ac18-15e23e756991-kube-api-access-dmdjw\") on node \"crc\" DevicePath \"\"" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.842273 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ced41b2-4b50-4681-ac18-15e23e756991" containerID="66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4" exitCode=0 Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.842374 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q98v6" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.842368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerDied","Data":"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4"} Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.842819 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q98v6" event={"ID":"6ced41b2-4b50-4681-ac18-15e23e756991","Type":"ContainerDied","Data":"7a6b688fd4d44fe92c2ef3b07ccfc0a10fe8443059d176a187c76cdefdaf327c"} Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.842856 4792 scope.go:117] "RemoveContainer" containerID="66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.873543 4792 scope.go:117] "RemoveContainer" containerID="c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.879794 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.892729 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q98v6"] Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.908641 4792 scope.go:117] "RemoveContainer" containerID="3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.952999 4792 scope.go:117] "RemoveContainer" containerID="66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4" Feb 16 22:55:30 crc kubenswrapper[4792]: E0216 22:55:30.953456 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4\": container with ID starting with 66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4 not found: ID does not exist" containerID="66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.953495 
4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4"} err="failed to get container status \"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4\": rpc error: code = NotFound desc = could not find container \"66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4\": container with ID starting with 66cc9e621d2a3ca984b7efe4e8d0326faabed0342f1398abbe4876a317fcbdb4 not found: ID does not exist" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.953521 4792 scope.go:117] "RemoveContainer" containerID="c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee" Feb 16 22:55:30 crc kubenswrapper[4792]: E0216 22:55:30.953790 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee\": container with ID starting with c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee not found: ID does not exist" containerID="c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.953819 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee"} err="failed to get container status \"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee\": rpc error: code = NotFound desc = could not find container \"c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee\": container with ID starting with c535ba5b27af8c6fe52d4daf8bc7a5fd251a800d8c7cbdf5ab5eaad17b2732ee not found: ID does not exist" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.953840 4792 scope.go:117] "RemoveContainer" containerID="3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4" Feb 16 22:55:30 crc kubenswrapper[4792]: E0216 22:55:30.954079 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4\": container with ID starting with 3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4 not found: ID does not exist" containerID="3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4" Feb 16 22:55:30 crc kubenswrapper[4792]: I0216 22:55:30.954105 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4"} err="failed to get container status \"3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4\": rpc error: code = NotFound desc = could not find container \"3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4\": container with ID starting with 3ea1ff4e69810a1de621e36abc7f29e8a34a715a32c20e5d30f2a698589fbad4 not found: ID does not exist" Feb 16 22:55:32 crc kubenswrapper[4792]: I0216 22:55:32.082030 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" path="/var/lib/kubelet/pods/6ced41b2-4b50-4681-ac18-15e23e756991/volumes" Feb 16 22:55:36 crc kubenswrapper[4792]: E0216 22:55:36.030445 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:55:39 crc kubenswrapper[4792]: E0216 22:55:39.028334 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:55:48 crc kubenswrapper[4792]: E0216 22:55:48.056041 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.040114 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg"] Feb 16 22:55:52 crc kubenswrapper[4792]: E0216 22:55:52.041112 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="extract-utilities" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.041148 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="extract-utilities" Feb 16 22:55:52 crc kubenswrapper[4792]: E0216 22:55:52.041179 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="extract-content" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.041185 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="extract-content" Feb 16 22:55:52 crc kubenswrapper[4792]: E0216 22:55:52.041236 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="registry-server" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.041243 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="registry-server" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.041527 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ced41b2-4b50-4681-ac18-15e23e756991" containerName="registry-server" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.042563 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.046128 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.046135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ldhl8" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.046229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.052272 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.052695 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg"] Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.135616 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrsw\" (UniqueName: \"kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.135737 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.135779 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.237991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.238064 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.238226 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrsw\" (UniqueName: 
\"kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.254308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.255027 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.257109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrsw\" (UniqueName: \"kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.362614 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 22:55:52 crc kubenswrapper[4792]: W0216 22:55:52.961892 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a8f572_f295_493c_aad8_417b6ca06b03.slice/crio-5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc WatchSource:0}: Error finding container 5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc: Status 404 returned error can't find the container with id 5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc Feb 16 22:55:52 crc kubenswrapper[4792]: I0216 22:55:52.971314 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg"] Feb 16 22:55:53 crc kubenswrapper[4792]: I0216 22:55:53.109475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" event={"ID":"01a8f572-f295-493c-aad8-417b6ca06b03","Type":"ContainerStarted","Data":"5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc"} Feb 16 22:55:54 crc kubenswrapper[4792]: E0216 22:55:54.029076 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:55:54 crc kubenswrapper[4792]: I0216 22:55:54.123273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" 
event={"ID":"01a8f572-f295-493c-aad8-417b6ca06b03","Type":"ContainerStarted","Data":"69584bfacee16aef20cfc720ed5ef25dff6673b699a755e395370ce4621ba82e"} Feb 16 22:55:54 crc kubenswrapper[4792]: I0216 22:55:54.146516 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" podStartSLOduration=1.390881245 podStartE2EDuration="2.146495956s" podCreationTimestamp="2026-02-16 22:55:52 +0000 UTC" firstStartedPulling="2026-02-16 22:55:52.964965801 +0000 UTC m=+4685.618244692" lastFinishedPulling="2026-02-16 22:55:53.720580472 +0000 UTC m=+4686.373859403" observedRunningTime="2026-02-16 22:55:54.144123383 +0000 UTC m=+4686.797402304" watchObservedRunningTime="2026-02-16 22:55:54.146495956 +0000 UTC m=+4686.799774857" Feb 16 22:55:59 crc kubenswrapper[4792]: E0216 22:55:59.028782 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:56:07 crc kubenswrapper[4792]: E0216 22:56:07.028674 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:56:11 crc kubenswrapper[4792]: E0216 22:56:11.029433 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:56:20 crc kubenswrapper[4792]: E0216 22:56:20.030204 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:56:26 crc kubenswrapper[4792]: E0216 22:56:26.030910 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:56:34 crc kubenswrapper[4792]: E0216 22:56:34.028818 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:56:39 crc kubenswrapper[4792]: E0216 22:56:39.029515 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:56:46 crc kubenswrapper[4792]: E0216 22:56:46.029895 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:56:53 crc kubenswrapper[4792]: E0216 22:56:53.028437 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:01 crc kubenswrapper[4792]: E0216 22:57:01.029215 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:57:01 crc kubenswrapper[4792]: I0216 22:57:01.532555 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:57:01 crc kubenswrapper[4792]: I0216 22:57:01.532654 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:57:04 crc kubenswrapper[4792]: E0216 22:57:04.030220 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:09 crc kubenswrapper[4792]: I0216 22:57:09.478573 4792 trace.go:236] Trace[1282320175]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (16-Feb-2026 22:57:07.904) (total time: 1574ms): Feb 16 22:57:09 crc kubenswrapper[4792]: Trace[1282320175]: [1.574143291s] [1.574143291s] END Feb 16 22:57:15 crc kubenswrapper[4792]: E0216 22:57:15.030174 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:15 crc kubenswrapper[4792]: E0216 22:57:15.030226 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" 
podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:57:28 crc kubenswrapper[4792]: E0216 22:57:28.038846 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:29 crc kubenswrapper[4792]: E0216 22:57:29.028931 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:57:31 crc kubenswrapper[4792]: I0216 22:57:31.532448 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:57:31 crc kubenswrapper[4792]: I0216 22:57:31.533000 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:57:39 crc kubenswrapper[4792]: E0216 22:57:39.029262 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:43 crc kubenswrapper[4792]: E0216 22:57:43.029012 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:57:54 crc kubenswrapper[4792]: E0216 22:57:54.028002 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:57:57 crc kubenswrapper[4792]: E0216 22:57:57.029529 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.532038 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.532761 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.532815 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.533900 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.533967 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de" gracePeriod=600 Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.784020 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de" exitCode=0 Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.784120 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de"} Feb 16 22:58:01 crc kubenswrapper[4792]: I0216 22:58:01.784289 4792 scope.go:117] "RemoveContainer" containerID="4ce94efc0bd8dcd980dd9b01488077051d54b491937112d8d34b37a38b41e6f8" Feb 16 22:58:02 crc kubenswrapper[4792]: I0216 22:58:02.798106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"} Feb 16 22:58:09 crc kubenswrapper[4792]: E0216 22:58:09.032913 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:58:11 crc kubenswrapper[4792]: E0216 22:58:11.030766 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:58:24 crc kubenswrapper[4792]: E0216 22:58:24.031865 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:58:26 crc kubenswrapper[4792]: E0216 22:58:26.030750 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:58:37 crc kubenswrapper[4792]: E0216 22:58:37.029020 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:58:41 crc kubenswrapper[4792]: E0216 22:58:41.029270 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:58:49 crc kubenswrapper[4792]: E0216 22:58:49.029776 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:58:52 crc kubenswrapper[4792]: E0216 22:58:52.029343 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:59:00 crc kubenswrapper[4792]: E0216 22:59:00.030345 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:59:04 crc kubenswrapper[4792]: E0216 22:59:04.028506 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.100448 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.103414 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.122143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.122201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.124576 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmjs\" (UniqueName: \"kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.137294 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.226407 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.226449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.226518 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdmjs\" (UniqueName: \"kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.226998 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.227039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.246801 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rdmjs\" (UniqueName: \"kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs\") pod \"community-operators-njjgp\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:05 crc kubenswrapper[4792]: I0216 22:59:05.443800 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:06 crc kubenswrapper[4792]: I0216 22:59:06.001586 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:06 crc kubenswrapper[4792]: I0216 22:59:06.614096 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerID="cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee" exitCode=0 Feb 16 22:59:06 crc kubenswrapper[4792]: I0216 22:59:06.614135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerDied","Data":"cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee"} Feb 16 22:59:06 crc kubenswrapper[4792]: I0216 22:59:06.614159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerStarted","Data":"b58fbdb5c9236b9098f0b98cfbf08083df7d6cabaec5d21807980c1bae0710d3"} Feb 16 22:59:08 crc kubenswrapper[4792]: I0216 22:59:08.664956 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerStarted","Data":"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb"} Feb 16 22:59:09 crc kubenswrapper[4792]: I0216 22:59:09.678988 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerID="32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb" exitCode=0 Feb 16 22:59:09 crc kubenswrapper[4792]: I0216 22:59:09.679046 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerDied","Data":"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb"} Feb 16 22:59:10 crc kubenswrapper[4792]: I0216 22:59:10.695828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerStarted","Data":"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1"} Feb 16 22:59:10 crc kubenswrapper[4792]: I0216 22:59:10.718453 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-njjgp" podStartSLOduration=2.266875104 podStartE2EDuration="5.718428913s" podCreationTimestamp="2026-02-16 22:59:05 +0000 UTC" firstStartedPulling="2026-02-16 22:59:06.62788739 +0000 UTC m=+4879.281166291" lastFinishedPulling="2026-02-16 22:59:10.079441199 +0000 UTC m=+4882.732720100" observedRunningTime="2026-02-16 22:59:10.717338235 +0000 UTC m=+4883.370617136" watchObservedRunningTime="2026-02-16 22:59:10.718428913 +0000 UTC m=+4883.371707804" Feb 16 22:59:15 crc kubenswrapper[4792]: E0216 22:59:15.028695 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:59:15 crc kubenswrapper[4792]: I0216 22:59:15.444995 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:15 crc kubenswrapper[4792]: I0216 22:59:15.445253 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:15 crc kubenswrapper[4792]: I0216 22:59:15.515901 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:15 crc kubenswrapper[4792]: I0216 22:59:15.809802 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:15 crc kubenswrapper[4792]: I0216 22:59:15.868282 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:17 crc kubenswrapper[4792]: I0216 22:59:17.784704 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-njjgp" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="registry-server" containerID="cri-o://924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1" gracePeriod=2 Feb 16 22:59:18 crc kubenswrapper[4792]: E0216 22:59:18.038742 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.365848 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.483281 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content\") pod \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.483541 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities\") pod \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.483727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdmjs\" (UniqueName: \"kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs\") pod \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\" (UID: \"3dc98708-8d19-43d8-ac2c-daa89e723c1b\") " Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.484373 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities" (OuterVolumeSpecName: "utilities") pod "3dc98708-8d19-43d8-ac2c-daa89e723c1b" (UID: "3dc98708-8d19-43d8-ac2c-daa89e723c1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.484495 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.493333 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs" (OuterVolumeSpecName: "kube-api-access-rdmjs") pod "3dc98708-8d19-43d8-ac2c-daa89e723c1b" (UID: "3dc98708-8d19-43d8-ac2c-daa89e723c1b"). InnerVolumeSpecName "kube-api-access-rdmjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.541959 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dc98708-8d19-43d8-ac2c-daa89e723c1b" (UID: "3dc98708-8d19-43d8-ac2c-daa89e723c1b"). InnerVolumeSpecName "catalog-content". 
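
Deletion of community-operators-njjgp walks each of its three volumes through the same three phases: operationExecutor.UnmountVolume started, then UnmountVolume.TearDown succeeded, then "Volume detached ... DevicePath" (the remaining detach records follow just below). When auditing teardown in a longer journal it helps to pair those phases per volume; a rough parser for excerpts in this exact format — the regexes are tuned to these lines and may need adjusting for other kubelet versions:

```python
import re
from collections import defaultdict

# Quotes inside kubenswrapper messages are sometimes escaped (\"...\") and
# sometimes not, hence the optional backslash in two of the patterns.
PATTERNS = {
    "unmount-started": re.compile(r'UnmountVolume started for volume \\?"([\w-]+)'),
    "teardown-ok":     re.compile(r'TearDown succeeded .*OuterVolumeSpecName: "([\w-]+)"'),
    "detached":        re.compile(r'Volume detached for volume \\?"([\w-]+)'),
}

def volume_phases(journal_lines):
    """Map volume name -> list of teardown phases seen for it."""
    phases = defaultdict(list)
    for line in journal_lines:
        for phase, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                phases[match.group(1)].append(phase)
    return dict(phases)
```

Fed this excerpt, it should report all three phases for utilities, kube-api-access-rdmjs and catalog-content, confirming a clean teardown before the pod directory is removed.
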
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.586864 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdmjs\" (UniqueName: \"kubernetes.io/projected/3dc98708-8d19-43d8-ac2c-daa89e723c1b-kube-api-access-rdmjs\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.586902 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc98708-8d19-43d8-ac2c-daa89e723c1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.802751 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerID="924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1" exitCode=0 Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.802802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerDied","Data":"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1"} Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.802833 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njjgp" event={"ID":"3dc98708-8d19-43d8-ac2c-daa89e723c1b","Type":"ContainerDied","Data":"b58fbdb5c9236b9098f0b98cfbf08083df7d6cabaec5d21807980c1bae0710d3"} Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.802857 4792 scope.go:117] "RemoveContainer" containerID="924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.802864 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-njjgp" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.874107 4792 scope.go:117] "RemoveContainer" containerID="32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.884661 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.898498 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-njjgp"] Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.926254 4792 scope.go:117] "RemoveContainer" containerID="cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.978491 4792 scope.go:117] "RemoveContainer" containerID="924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1" Feb 16 22:59:18 crc kubenswrapper[4792]: E0216 22:59:18.993842 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1\": container with ID starting with 924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1 not found: ID does not exist" containerID="924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.993900 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1"} err="failed to get container status \"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1\": rpc error: code = NotFound desc = could not find container \"924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1\": container with ID starting with 924c8fae5af36b369ba40462389d6001bf5bcfae0e84909519e4a3f704aa90c1 not found: ID does not exist" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.993945 4792 scope.go:117] "RemoveContainer" containerID="32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb" Feb 16 22:59:18 crc kubenswrapper[4792]: E0216 22:59:18.996059 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb\": container with ID starting with 32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb not found: ID does not exist" containerID="32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.996103 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb"} err="failed to get container status \"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb\": rpc error: code = NotFound desc = could not find container \"32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb\": container with ID starting with 32d08cfe734b7c5d224db84af20013048b1cb5c8e271ef5ccb9c4f95e600eedb not found: ID does not exist" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.996134 4792 scope.go:117] "RemoveContainer" containerID="cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee" Feb 16 22:59:18 crc kubenswrapper[4792]: E0216 22:59:18.996424 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee\": container with ID starting with cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee not found: ID does not exist" containerID="cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee" Feb 16 22:59:18 crc kubenswrapper[4792]: I0216 22:59:18.996484 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee"} err="failed to get container status \"cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee\": rpc error: code = NotFound desc = could not find container \"cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee\": container with ID starting with cb12986eab61913dc5451d18068780608117a3ff6bfac3ab7415de2056572aee not found: ID does not exist" Feb 16 22:59:20 crc kubenswrapper[4792]: I0216 22:59:20.045673 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" path="/var/lib/kubelet/pods/3dc98708-8d19-43d8-ac2c-daa89e723c1b/volumes" Feb 16 22:59:27 crc kubenswrapper[4792]: E0216 22:59:27.030856 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:59:31 crc kubenswrapper[4792]: E0216 22:59:31.028415 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:59:38 crc kubenswrapper[4792]: E0216 22:59:38.038036 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:59:46 crc kubenswrapper[4792]: E0216 22:59:46.030330 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 22:59:51 crc kubenswrapper[4792]: E0216 22:59:51.028972 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 22:59:57 crc kubenswrapper[4792]: I0216 22:59:57.028977 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:59:57 crc kubenswrapper[4792]: E0216 22:59:57.152370 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:59:57 crc kubenswrapper[4792]: E0216 22:59:57.152451 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 22:59:57 crc kubenswrapper[4792]: E0216 22:59:57.152658 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 22:59:57 crc kubenswrapper[4792]: E0216 22:59:57.154545 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.176523 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj"] Feb 16 23:00:00 crc kubenswrapper[4792]: E0216 23:00:00.177936 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="extract-utilities" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.177972 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="extract-utilities" Feb 16 23:00:00 crc kubenswrapper[4792]: E0216 23:00:00.178040 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.178053 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4792]: E0216 23:00:00.178107 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="extract-content" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.178127 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="extract-content" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.178826 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc98708-8d19-43d8-ac2c-daa89e723c1b" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.180419 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.182768 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.184870 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.189295 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj"] Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.340502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.340746 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6blhv\" (UniqueName: \"kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.340882 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.444068 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6blhv\" (UniqueName: \"kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.444315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.444456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.445366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume\") pod 
\"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.451579 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.470224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6blhv\" (UniqueName: \"kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv\") pod \"collect-profiles-29521380-bnsrj\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:00 crc kubenswrapper[4792]: I0216 23:00:00.509696 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.054350 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj"] Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.341028 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" event={"ID":"3bb77d56-0e53-4a16-9511-fa8b0a780ba7","Type":"ContainerStarted","Data":"e29ab807cab447af383f0f8316794883bc0d0f7797187d2ff19e8a188c599233"} Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.341349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" event={"ID":"3bb77d56-0e53-4a16-9511-fa8b0a780ba7","Type":"ContainerStarted","Data":"6ed6176edbefe347b746dd9d290e779cbe6aa04a82d9a88c420156a2340c178f"} Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.362794 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" podStartSLOduration=1.3627774750000001 podStartE2EDuration="1.362777475s" podCreationTimestamp="2026-02-16 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:00:01.354159243 +0000 UTC m=+4934.007438144" watchObservedRunningTime="2026-02-16 23:00:01.362777475 +0000 UTC m=+4934.016056356" Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.532473 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:00:01 crc kubenswrapper[4792]: I0216 23:00:01.532540 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:00:02 crc kubenswrapper[4792]: I0216 23:00:02.357100 4792 generic.go:334] "Generic (PLEG): 
container finished" podID="3bb77d56-0e53-4a16-9511-fa8b0a780ba7" containerID="e29ab807cab447af383f0f8316794883bc0d0f7797187d2ff19e8a188c599233" exitCode=0 Feb 16 23:00:02 crc kubenswrapper[4792]: I0216 23:00:02.357178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" event={"ID":"3bb77d56-0e53-4a16-9511-fa8b0a780ba7","Type":"ContainerDied","Data":"e29ab807cab447af383f0f8316794883bc0d0f7797187d2ff19e8a188c599233"} Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.270159 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.348686 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6blhv\" (UniqueName: \"kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv\") pod \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.348790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume\") pod \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.348832 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume\") pod \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\" (UID: \"3bb77d56-0e53-4a16-9511-fa8b0a780ba7\") " Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.349984 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bb77d56-0e53-4a16-9511-fa8b0a780ba7" (UID: "3bb77d56-0e53-4a16-9511-fa8b0a780ba7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.358736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3bb77d56-0e53-4a16-9511-fa8b0a780ba7" (UID: "3bb77d56-0e53-4a16-9511-fa8b0a780ba7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.358911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv" (OuterVolumeSpecName: "kube-api-access-6blhv") pod "3bb77d56-0e53-4a16-9511-fa8b0a780ba7" (UID: "3bb77d56-0e53-4a16-9511-fa8b0a780ba7"). InnerVolumeSpecName "kube-api-access-6blhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.381740 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" event={"ID":"3bb77d56-0e53-4a16-9511-fa8b0a780ba7","Type":"ContainerDied","Data":"6ed6176edbefe347b746dd9d290e779cbe6aa04a82d9a88c420156a2340c178f"} Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.381790 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed6176edbefe347b746dd9d290e779cbe6aa04a82d9a88c420156a2340c178f" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.381808 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-bnsrj" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.452004 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6blhv\" (UniqueName: \"kubernetes.io/projected/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-kube-api-access-6blhv\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.452060 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.452078 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb77d56-0e53-4a16-9511-fa8b0a780ba7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.459388 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt"] Feb 16 23:00:04 crc kubenswrapper[4792]: I0216 23:00:04.483799 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-fb8pt"] Feb 16 23:00:05 crc kubenswrapper[4792]: E0216 23:00:05.135951 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:00:05 crc kubenswrapper[4792]: E0216 23:00:05.136396 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:00:05 crc kubenswrapper[4792]: E0216 23:00:05.136619 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 23:00:05 crc kubenswrapper[4792]: E0216 23:00:05.137921 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
Feb 16 23:00:05 crc kubenswrapper[4792]: E0216 23:00:05.137921 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:00:06 crc kubenswrapper[4792]: I0216 23:00:06.043916 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ce140f-735e-4460-a10b-4d383cbf8fbf" path="/var/lib/kubelet/pods/42ce140f-735e-4460-a10b-4d383cbf8fbf/volumes"
Feb 16 23:00:11 crc kubenswrapper[4792]: E0216 23:00:11.028762 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:00:18 crc kubenswrapper[4792]: E0216 23:00:18.038871 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:00:22 crc kubenswrapper[4792]: E0216 23:00:22.030344 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:00:31 crc kubenswrapper[4792]: I0216 23:00:31.532792 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 23:00:31 crc kubenswrapper[4792]: I0216 23:00:31.533505 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 23:00:33 crc kubenswrapper[4792]: E0216 23:00:33.028032 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:00:33 crc kubenswrapper[4792]: E0216 23:00:33.035182 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:00:46 crc kubenswrapper[4792]: E0216 23:00:46.039304 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:00:48 crc kubenswrapper[4792]: E0216 23:00:48.036586 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:00:57 crc kubenswrapper[4792]: E0216 23:00:57.031221 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.173461 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521381-km4sq"]
Feb 16 23:01:00 crc kubenswrapper[4792]: E0216 23:01:00.174843 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb77d56-0e53-4a16-9511-fa8b0a780ba7" containerName="collect-profiles"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.174868 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb77d56-0e53-4a16-9511-fa8b0a780ba7" containerName="collect-profiles"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.175319 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb77d56-0e53-4a16-9511-fa8b0a780ba7" containerName="collect-profiles"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.176504 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.190477 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521381-km4sq"]
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.260815 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.261190 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.261428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7qr7\" (UniqueName: \"kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.261697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.364462 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.364568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.364769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7qr7\" (UniqueName: \"kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.364911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.374047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.374940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.383723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Feb 16 23:01:00 crc kubenswrapper[4792]: I0216 23:01:00.400122 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7qr7\" (UniqueName: \"kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7\") pod \"keystone-cron-29521381-km4sq\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " pod="openstack/keystone-cron-29521381-km4sq"
Need to start a new one" pod="openstack/keystone-cron-29521381-km4sq" Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.063703 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521381-km4sq"] Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.094548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521381-km4sq" event={"ID":"472704bc-8a94-4472-aaf0-b7527cfeb102","Type":"ContainerStarted","Data":"b32f6c2b7fef1b3fec884528894d7893b03f96cf6bd02d36e13da19d2fbc925b"} Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.532338 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.532687 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.532727 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.533573 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:01:01 crc kubenswrapper[4792]: I0216 23:01:01.533655 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" gracePeriod=600 Feb 16 23:01:01 crc kubenswrapper[4792]: E0216 23:01:01.656882 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.112566 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" exitCode=0 Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.112696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"} Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.112743 4792 scope.go:117] "RemoveContainer" 
containerID="73ee31dd31c26af850ce2f8aaccacb569c6260a43fad2ecc9aa69ac8fff432de" Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.115020 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:01:02 crc kubenswrapper[4792]: E0216 23:01:02.115749 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.117064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521381-km4sq" event={"ID":"472704bc-8a94-4472-aaf0-b7527cfeb102","Type":"ContainerStarted","Data":"a511f409a3eed1fe593dc08a6567eb8313d463219c8b31d375da921472ed02f8"} Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.182977 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521381-km4sq" podStartSLOduration=2.182957648 podStartE2EDuration="2.182957648s" podCreationTimestamp="2026-02-16 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:01:02.172738132 +0000 UTC m=+4994.826017033" watchObservedRunningTime="2026-02-16 23:01:02.182957648 +0000 UTC m=+4994.836236539" Feb 16 23:01:02 crc kubenswrapper[4792]: I0216 23:01:02.320076 4792 scope.go:117] "RemoveContainer" containerID="81a6f328498bd9a8b48935cd4774a8c89d4cf90ac2946b665aa3bd46c7e71885" Feb 16 23:01:03 crc kubenswrapper[4792]: E0216 23:01:03.027486 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:01:05 crc kubenswrapper[4792]: I0216 23:01:05.161682 4792 generic.go:334] "Generic (PLEG): container finished" podID="472704bc-8a94-4472-aaf0-b7527cfeb102" containerID="a511f409a3eed1fe593dc08a6567eb8313d463219c8b31d375da921472ed02f8" exitCode=0 Feb 16 23:01:05 crc kubenswrapper[4792]: I0216 23:01:05.161760 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521381-km4sq" event={"ID":"472704bc-8a94-4472-aaf0-b7527cfeb102","Type":"ContainerDied","Data":"a511f409a3eed1fe593dc08a6567eb8313d463219c8b31d375da921472ed02f8"} Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.630441 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521381-km4sq" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.735061 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys\") pod \"472704bc-8a94-4472-aaf0-b7527cfeb102\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.735167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data\") pod \"472704bc-8a94-4472-aaf0-b7527cfeb102\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.735217 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7qr7\" (UniqueName: \"kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7\") pod \"472704bc-8a94-4472-aaf0-b7527cfeb102\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.735315 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle\") pod \"472704bc-8a94-4472-aaf0-b7527cfeb102\" (UID: \"472704bc-8a94-4472-aaf0-b7527cfeb102\") " Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.742439 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7" (OuterVolumeSpecName: "kube-api-access-f7qr7") pod "472704bc-8a94-4472-aaf0-b7527cfeb102" (UID: "472704bc-8a94-4472-aaf0-b7527cfeb102"). InnerVolumeSpecName "kube-api-access-f7qr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.742567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "472704bc-8a94-4472-aaf0-b7527cfeb102" (UID: "472704bc-8a94-4472-aaf0-b7527cfeb102"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.768423 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "472704bc-8a94-4472-aaf0-b7527cfeb102" (UID: "472704bc-8a94-4472-aaf0-b7527cfeb102"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.808439 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data" (OuterVolumeSpecName: "config-data") pod "472704bc-8a94-4472-aaf0-b7527cfeb102" (UID: "472704bc-8a94-4472-aaf0-b7527cfeb102"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.837840 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.837878 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.837887 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/472704bc-8a94-4472-aaf0-b7527cfeb102-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:06 crc kubenswrapper[4792]: I0216 23:01:06.837897 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7qr7\" (UniqueName: \"kubernetes.io/projected/472704bc-8a94-4472-aaf0-b7527cfeb102-kube-api-access-f7qr7\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:07 crc kubenswrapper[4792]: I0216 23:01:07.191078 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521381-km4sq" event={"ID":"472704bc-8a94-4472-aaf0-b7527cfeb102","Type":"ContainerDied","Data":"b32f6c2b7fef1b3fec884528894d7893b03f96cf6bd02d36e13da19d2fbc925b"} Feb 16 23:01:07 crc kubenswrapper[4792]: I0216 23:01:07.191137 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b32f6c2b7fef1b3fec884528894d7893b03f96cf6bd02d36e13da19d2fbc925b" Feb 16 23:01:07 crc kubenswrapper[4792]: I0216 23:01:07.191224 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521381-km4sq" Feb 16 23:01:12 crc kubenswrapper[4792]: E0216 23:01:12.028179 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:01:15 crc kubenswrapper[4792]: E0216 23:01:15.031043 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:01:17 crc kubenswrapper[4792]: I0216 23:01:17.027433 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:01:17 crc kubenswrapper[4792]: E0216 23:01:17.028429 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:23 crc kubenswrapper[4792]: E0216 23:01:23.029139 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:01:27 crc kubenswrapper[4792]: E0216 23:01:27.030680 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:01:30 crc kubenswrapper[4792]: I0216 23:01:30.027640 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:01:30 crc kubenswrapper[4792]: E0216 23:01:30.028757 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:35 crc kubenswrapper[4792]: E0216 23:01:35.030014 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:01:40 crc kubenswrapper[4792]: E0216 23:01:40.028991 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:01:42 crc kubenswrapper[4792]: I0216 23:01:42.026545 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:01:42 crc kubenswrapper[4792]: E0216 23:01:42.027702 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:47 crc kubenswrapper[4792]: E0216 23:01:47.029913 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:01:53 crc kubenswrapper[4792]: I0216 23:01:53.027251 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:01:53 crc kubenswrapper[4792]: E0216 23:01:53.027996 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:01:55 crc kubenswrapper[4792]: E0216 23:01:55.028584 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:01:59 crc kubenswrapper[4792]: E0216 23:01:59.028715 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:02:05 crc kubenswrapper[4792]: I0216 23:02:05.026978 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:02:05 crc kubenswrapper[4792]: E0216 23:02:05.027885 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:02:10 crc kubenswrapper[4792]: E0216 23:02:10.029682 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:02:14 crc kubenswrapper[4792]: I0216 23:02:14.016674 4792 generic.go:334] "Generic (PLEG): container finished" podID="01a8f572-f295-493c-aad8-417b6ca06b03" containerID="69584bfacee16aef20cfc720ed5ef25dff6673b699a755e395370ce4621ba82e" exitCode=2 Feb 16 23:02:14 crc kubenswrapper[4792]: I0216 23:02:14.016718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" event={"ID":"01a8f572-f295-493c-aad8-417b6ca06b03","Type":"ContainerDied","Data":"69584bfacee16aef20cfc720ed5ef25dff6673b699a755e395370ce4621ba82e"} Feb 16 23:02:14 crc kubenswrapper[4792]: E0216 23:02:14.031005 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.544511 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.631006 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam\") pod \"01a8f572-f295-493c-aad8-417b6ca06b03\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.631720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory\") pod \"01a8f572-f295-493c-aad8-417b6ca06b03\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.631896 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrsw\" (UniqueName: \"kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw\") pod \"01a8f572-f295-493c-aad8-417b6ca06b03\" (UID: \"01a8f572-f295-493c-aad8-417b6ca06b03\") " Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.642090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw" (OuterVolumeSpecName: "kube-api-access-znrsw") pod "01a8f572-f295-493c-aad8-417b6ca06b03" (UID: "01a8f572-f295-493c-aad8-417b6ca06b03"). InnerVolumeSpecName "kube-api-access-znrsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.680893 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory" (OuterVolumeSpecName: "inventory") pod "01a8f572-f295-493c-aad8-417b6ca06b03" (UID: "01a8f572-f295-493c-aad8-417b6ca06b03"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.701993 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01a8f572-f295-493c-aad8-417b6ca06b03" (UID: "01a8f572-f295-493c-aad8-417b6ca06b03"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.736782 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.736839 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01a8f572-f295-493c-aad8-417b6ca06b03-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:15 crc kubenswrapper[4792]: I0216 23:02:15.736933 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrsw\" (UniqueName: \"kubernetes.io/projected/01a8f572-f295-493c-aad8-417b6ca06b03-kube-api-access-znrsw\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:16 crc kubenswrapper[4792]: I0216 23:02:16.043648 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" Feb 16 23:02:16 crc kubenswrapper[4792]: I0216 23:02:16.043579 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg" event={"ID":"01a8f572-f295-493c-aad8-417b6ca06b03","Type":"ContainerDied","Data":"5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc"} Feb 16 23:02:16 crc kubenswrapper[4792]: I0216 23:02:16.044040 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc" Feb 16 23:02:16 crc kubenswrapper[4792]: E0216 23:02:16.277152 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a8f572_f295_493c_aad8_417b6ca06b03.slice/crio-5d406f885583cbed297a6879a3b8744f0f3cf259b6b2518178a8d7862b38aefc\": RecentStats: unable to find data in memory cache]" Feb 16 23:02:18 crc kubenswrapper[4792]: I0216 23:02:18.033779 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:02:18 crc kubenswrapper[4792]: E0216 23:02:18.034138 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:02:21 crc kubenswrapper[4792]: E0216 23:02:21.030175 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.873010 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"] Feb 16 23:02:24 crc kubenswrapper[4792]: E0216 23:02:24.874056 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a8f572-f295-493c-aad8-417b6ca06b03" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.874074 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a8f572-f295-493c-aad8-417b6ca06b03" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:02:24 crc kubenswrapper[4792]: E0216 23:02:24.874117 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472704bc-8a94-4472-aaf0-b7527cfeb102" containerName="keystone-cron" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.874127 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="472704bc-8a94-4472-aaf0-b7527cfeb102" containerName="keystone-cron" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.874401 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="472704bc-8a94-4472-aaf0-b7527cfeb102" containerName="keystone-cron" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.874433 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a8f572-f295-493c-aad8-417b6ca06b03" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.876508 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:24 crc kubenswrapper[4792]: I0216 23:02:24.885452 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"] Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.002133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-catalog-content\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.002350 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.002513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zf6z\" (UniqueName: \"kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.105500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-catalog-content\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.105687 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.105791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zf6z\" (UniqueName: \"kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.106835 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-catalog-content\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.106863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " 
pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.125227 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zf6z\" (UniqueName: \"kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z\") pod \"redhat-operators-qsjfm\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") " pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.245285 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:02:25 crc kubenswrapper[4792]: I0216 23:02:25.673463 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"] Feb 16 23:02:26 crc kubenswrapper[4792]: I0216 23:02:26.158086 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerID="c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6" exitCode=0 Feb 16 23:02:26 crc kubenswrapper[4792]: I0216 23:02:26.158130 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerDied","Data":"c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6"} Feb 16 23:02:26 crc kubenswrapper[4792]: I0216 23:02:26.158154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerStarted","Data":"32513b8038d1dc532efc89e229de159c79f6e67ce03c3e0c430856699ea36a57"} Feb 16 23:02:27 crc kubenswrapper[4792]: I0216 23:02:27.177834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerStarted","Data":"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852"} Feb 16 23:02:28 crc kubenswrapper[4792]: E0216 23:02:28.036534 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:02:29 crc kubenswrapper[4792]: I0216 23:02:29.027349 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:02:29 crc kubenswrapper[4792]: E0216 23:02:29.028112 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:02:32 crc kubenswrapper[4792]: E0216 23:02:32.029890 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:02:35 crc kubenswrapper[4792]: I0216 
Feb 16 23:02:35 crc kubenswrapper[4792]: I0216 23:02:35.290916 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerID="9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852" exitCode=0
Feb 16 23:02:35 crc kubenswrapper[4792]: I0216 23:02:35.291104 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerDied","Data":"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852"}
Feb 16 23:02:36 crc kubenswrapper[4792]: I0216 23:02:36.306218 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerStarted","Data":"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d"}
Feb 16 23:02:36 crc kubenswrapper[4792]: I0216 23:02:36.325783 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsjfm" podStartSLOduration=2.789783169 podStartE2EDuration="12.325755938s" podCreationTimestamp="2026-02-16 23:02:24 +0000 UTC" firstStartedPulling="2026-02-16 23:02:26.159621505 +0000 UTC m=+5078.812900396" lastFinishedPulling="2026-02-16 23:02:35.695594274 +0000 UTC m=+5088.348873165" observedRunningTime="2026-02-16 23:02:36.325045679 +0000 UTC m=+5088.978324560" watchObservedRunningTime="2026-02-16 23:02:36.325755938 +0000 UTC m=+5088.979034819"
Feb 16 23:02:40 crc kubenswrapper[4792]: E0216 23:02:40.043340 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:02:41 crc kubenswrapper[4792]: I0216 23:02:41.028050 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"
Feb 16 23:02:41 crc kubenswrapper[4792]: E0216 23:02:41.028511 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:02:44 crc kubenswrapper[4792]: E0216 23:02:44.028523 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:02:45 crc kubenswrapper[4792]: I0216 23:02:45.246306 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsjfm"
Feb 16 23:02:45 crc kubenswrapper[4792]: I0216 23:02:45.246549 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsjfm"
Feb 16 23:02:46 crc kubenswrapper[4792]: I0216 23:02:46.303224 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsjfm" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" probeResult="failure" output=<
Feb 16 23:02:46 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 23:02:46 crc kubenswrapper[4792]: >
Feb 16 23:02:55 crc kubenswrapper[4792]: I0216 23:02:55.026155 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"
Feb 16 23:02:55 crc kubenswrapper[4792]: E0216 23:02:55.029365 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:02:55 crc kubenswrapper[4792]: E0216 23:02:55.029390 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:02:56 crc kubenswrapper[4792]: I0216 23:02:56.312693 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsjfm" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" probeResult="failure" output=<
Feb 16 23:02:56 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Feb 16 23:02:56 crc kubenswrapper[4792]: >
Feb 16 23:02:57 crc kubenswrapper[4792]: E0216 23:02:57.028565 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:03:05 crc kubenswrapper[4792]: I0216 23:03:05.311720 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsjfm"
Feb 16 23:03:05 crc kubenswrapper[4792]: I0216 23:03:05.375084 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsjfm"
Feb 16 23:03:05 crc kubenswrapper[4792]: I0216 23:03:05.566114 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"]
Feb 16 23:03:06 crc kubenswrapper[4792]: I0216 23:03:06.650662 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsjfm" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" containerID="cri-o://cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d" gracePeriod=2
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.200616 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsjfm"
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.257009 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-catalog-content\") pod \"3e8d8740-efea-47e9-866a-debe317ff9f6\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") "
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.257203 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities\") pod \"3e8d8740-efea-47e9-866a-debe317ff9f6\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") "
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.257263 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zf6z\" (UniqueName: \"kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z\") pod \"3e8d8740-efea-47e9-866a-debe317ff9f6\" (UID: \"3e8d8740-efea-47e9-866a-debe317ff9f6\") "
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.258002 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities" (OuterVolumeSpecName: "utilities") pod "3e8d8740-efea-47e9-866a-debe317ff9f6" (UID: "3e8d8740-efea-47e9-866a-debe317ff9f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.259025 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.267933 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z" (OuterVolumeSpecName: "kube-api-access-7zf6z") pod "3e8d8740-efea-47e9-866a-debe317ff9f6" (UID: "3e8d8740-efea-47e9-866a-debe317ff9f6"). InnerVolumeSpecName "kube-api-access-7zf6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.361720 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zf6z\" (UniqueName: \"kubernetes.io/projected/3e8d8740-efea-47e9-866a-debe317ff9f6-kube-api-access-7zf6z\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.463743 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8d8740-efea-47e9-866a-debe317ff9f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.664732 4792 generic.go:334] "Generic (PLEG): container finished" podID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerID="cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d" exitCode=0 Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.664810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerDied","Data":"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d"} Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.664860 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsjfm" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.664882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsjfm" event={"ID":"3e8d8740-efea-47e9-866a-debe317ff9f6","Type":"ContainerDied","Data":"32513b8038d1dc532efc89e229de159c79f6e67ce03c3e0c430856699ea36a57"} Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.664916 4792 scope.go:117] "RemoveContainer" containerID="cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.710353 4792 scope.go:117] "RemoveContainer" containerID="9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.712946 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"] Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.731599 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsjfm"] Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.741266 4792 scope.go:117] "RemoveContainer" containerID="c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.813778 4792 scope.go:117] "RemoveContainer" containerID="cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d" Feb 16 23:03:07 crc kubenswrapper[4792]: E0216 23:03:07.814345 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d\": container with ID starting with cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d not found: ID does not exist" containerID="cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.814385 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d"} err="failed to get container status \"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d\": rpc error: code = NotFound desc = could not find container \"cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d\": container with ID starting with cf87ac131554c0842132d9a0c7ea4a305d0282864cb2fc8578c75dd95ab5a72d not found: ID does not exist" Feb 16 23:03:07 crc 
kubenswrapper[4792]: I0216 23:03:07.814411 4792 scope.go:117] "RemoveContainer" containerID="9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852" Feb 16 23:03:07 crc kubenswrapper[4792]: E0216 23:03:07.814833 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852\": container with ID starting with 9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852 not found: ID does not exist" containerID="9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.814867 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852"} err="failed to get container status \"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852\": rpc error: code = NotFound desc = could not find container \"9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852\": container with ID starting with 9683efd6119eaa3f29b7e6c97656949e4e72930ad78ac7c09c20d9478d383852 not found: ID does not exist" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.814898 4792 scope.go:117] "RemoveContainer" containerID="c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6" Feb 16 23:03:07 crc kubenswrapper[4792]: E0216 23:03:07.815434 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6\": container with ID starting with c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6 not found: ID does not exist" containerID="c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6" Feb 16 23:03:07 crc kubenswrapper[4792]: I0216 23:03:07.815465 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6"} err="failed to get container status \"c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6\": rpc error: code = NotFound desc = could not find container \"c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6\": container with ID starting with c5696003f0826622d7aacd9932dcc030d2a8a42171865d78fa4f0b3a89940fb6 not found: ID does not exist" Feb 16 23:03:08 crc kubenswrapper[4792]: I0216 23:03:08.037502 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:03:08 crc kubenswrapper[4792]: E0216 23:03:08.038205 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:03:08 crc kubenswrapper[4792]: E0216 23:03:08.047160 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:03:08 crc 
kubenswrapper[4792]: I0216 23:03:08.056875 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" path="/var/lib/kubelet/pods/3e8d8740-efea-47e9-866a-debe317ff9f6/volumes" Feb 16 23:03:12 crc kubenswrapper[4792]: E0216 23:03:12.028193 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:03:20 crc kubenswrapper[4792]: I0216 23:03:20.027833 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:03:20 crc kubenswrapper[4792]: E0216 23:03:20.029429 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:03:21 crc kubenswrapper[4792]: E0216 23:03:21.028527 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:03:26 crc kubenswrapper[4792]: E0216 23:03:26.028821 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.147457 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcr5s/must-gather-8ttm4"] Feb 16 23:03:28 crc kubenswrapper[4792]: E0216 23:03:28.149288 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.149322 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" Feb 16 23:03:28 crc kubenswrapper[4792]: E0216 23:03:28.149341 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="extract-utilities" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.149353 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="extract-utilities" Feb 16 23:03:28 crc kubenswrapper[4792]: E0216 23:03:28.149370 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="extract-content" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.149381 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="extract-content" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.149769 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d8740-efea-47e9-866a-debe317ff9f6" containerName="registry-server" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.151553 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.154276 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcr5s"/"openshift-service-ca.crt" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.154278 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qcr5s"/"default-dockercfg-w87tl" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.154405 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcr5s"/"kube-root-ca.crt" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.166509 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcr5s/must-gather-8ttm4"] Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.247061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.247623 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgm9\" (UniqueName: \"kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.350474 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgm9\" (UniqueName: \"kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.350696 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.351243 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.621480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgm9\" (UniqueName: \"kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9\") pod \"must-gather-8ttm4\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:28 crc kubenswrapper[4792]: I0216 23:03:28.776181 4792 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:03:29 crc kubenswrapper[4792]: I0216 23:03:29.277133 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcr5s/must-gather-8ttm4"] Feb 16 23:03:29 crc kubenswrapper[4792]: I0216 23:03:29.899168 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" event={"ID":"6878f63d-35aa-4e64-b246-f3b6395d0383","Type":"ContainerStarted","Data":"8f794c69cd59d285d3d43a609caf72efacc1dd227d31201c3675f4379cc36514"} Feb 16 23:03:33 crc kubenswrapper[4792]: I0216 23:03:33.026720 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:03:33 crc kubenswrapper[4792]: E0216 23:03:33.027998 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:03:33 crc kubenswrapper[4792]: E0216 23:03:33.038292 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:03:39 crc kubenswrapper[4792]: I0216 23:03:39.035498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" event={"ID":"6878f63d-35aa-4e64-b246-f3b6395d0383","Type":"ContainerStarted","Data":"a9271518297ad0b26e37a2f6200f9e4a4a10064a9ef519ed19c9bc2513205c6e"} Feb 16 23:03:39 crc kubenswrapper[4792]: I0216 23:03:39.036256 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" event={"ID":"6878f63d-35aa-4e64-b246-f3b6395d0383","Type":"ContainerStarted","Data":"9a8cf0d6f221fc2970f86625c8e1be47e9ee05ec002fb25d23f490965c276a94"} Feb 16 23:03:39 crc kubenswrapper[4792]: I0216 23:03:39.058169 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" podStartSLOduration=2.17455343 podStartE2EDuration="11.058144678s" podCreationTimestamp="2026-02-16 23:03:28 +0000 UTC" firstStartedPulling="2026-02-16 23:03:29.280210116 +0000 UTC m=+5141.933489017" lastFinishedPulling="2026-02-16 23:03:38.163801364 +0000 UTC m=+5150.817080265" observedRunningTime="2026-02-16 23:03:39.049737751 +0000 UTC m=+5151.703016642" watchObservedRunningTime="2026-02-16 23:03:39.058144678 +0000 UTC m=+5151.711423569" Feb 16 23:03:40 crc kubenswrapper[4792]: E0216 23:03:40.037438 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.489410 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-l4wpt"] Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 
23:03:43.491887 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.612243 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.612664 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47mk\" (UniqueName: \"kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.715463 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47mk\" (UniqueName: \"kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.715625 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.715740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.733816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47mk\" (UniqueName: \"kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk\") pod \"crc-debug-l4wpt\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:43 crc kubenswrapper[4792]: I0216 23:03:43.810706 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:03:44 crc kubenswrapper[4792]: I0216 23:03:44.097310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" event={"ID":"13881a88-d5bb-47df-91c6-83bfeea96292","Type":"ContainerStarted","Data":"a5f5c657652f3a61f3c5f531afa85e1096b23a11de3ca9e53cd1dd965c1b295c"} Feb 16 23:03:46 crc kubenswrapper[4792]: E0216 23:03:46.041865 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:03:48 crc kubenswrapper[4792]: I0216 23:03:48.046507 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:03:48 crc kubenswrapper[4792]: E0216 23:03:48.047086 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:03:54 crc kubenswrapper[4792]: E0216 23:03:54.028544 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:03:57 crc kubenswrapper[4792]: I0216 23:03:57.259396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" event={"ID":"13881a88-d5bb-47df-91c6-83bfeea96292","Type":"ContainerStarted","Data":"5d8ab09807b29e622f2074777fff97dd12a2466b1bf2ccf19b4baf7ef795449b"} Feb 16 23:03:57 crc kubenswrapper[4792]: I0216 23:03:57.277820 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" podStartSLOduration=1.310095746 podStartE2EDuration="14.277798717s" podCreationTimestamp="2026-02-16 23:03:43 +0000 UTC" firstStartedPulling="2026-02-16 23:03:43.872098541 +0000 UTC m=+5156.525377432" lastFinishedPulling="2026-02-16 23:03:56.839801502 +0000 UTC m=+5169.493080403" observedRunningTime="2026-02-16 23:03:57.271545428 +0000 UTC m=+5169.924824319" watchObservedRunningTime="2026-02-16 23:03:57.277798717 +0000 UTC m=+5169.931077628" Feb 16 23:03:59 crc kubenswrapper[4792]: I0216 23:03:59.027219 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:03:59 crc kubenswrapper[4792]: E0216 23:03:59.028409 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:04:01 crc kubenswrapper[4792]: E0216 
23:04:01.033073 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:04:07 crc kubenswrapper[4792]: E0216 23:04:07.031505 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:04:10 crc kubenswrapper[4792]: I0216 23:04:10.027348 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:04:10 crc kubenswrapper[4792]: E0216 23:04:10.028297 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:04:15 crc kubenswrapper[4792]: I0216 23:04:15.485174 4792 generic.go:334] "Generic (PLEG): container finished" podID="13881a88-d5bb-47df-91c6-83bfeea96292" containerID="5d8ab09807b29e622f2074777fff97dd12a2466b1bf2ccf19b4baf7ef795449b" exitCode=0 Feb 16 23:04:15 crc kubenswrapper[4792]: I0216 23:04:15.485294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" event={"ID":"13881a88-d5bb-47df-91c6-83bfeea96292","Type":"ContainerDied","Data":"5d8ab09807b29e622f2074777fff97dd12a2466b1bf2ccf19b4baf7ef795449b"} Feb 16 23:04:16 crc kubenswrapper[4792]: E0216 23:04:16.028364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.504912 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" event={"ID":"13881a88-d5bb-47df-91c6-83bfeea96292","Type":"ContainerDied","Data":"a5f5c657652f3a61f3c5f531afa85e1096b23a11de3ca9e53cd1dd965c1b295c"} Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.505240 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f5c657652f3a61f3c5f531afa85e1096b23a11de3ca9e53cd1dd965c1b295c" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.516109 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.563494 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-l4wpt"] Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.574206 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-l4wpt"] Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.650767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f47mk\" (UniqueName: \"kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk\") pod \"13881a88-d5bb-47df-91c6-83bfeea96292\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.650971 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host\") pod \"13881a88-d5bb-47df-91c6-83bfeea96292\" (UID: \"13881a88-d5bb-47df-91c6-83bfeea96292\") " Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.651087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host" (OuterVolumeSpecName: "host") pod "13881a88-d5bb-47df-91c6-83bfeea96292" (UID: "13881a88-d5bb-47df-91c6-83bfeea96292"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.651863 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13881a88-d5bb-47df-91c6-83bfeea96292-host\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.657512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk" (OuterVolumeSpecName: "kube-api-access-f47mk") pod "13881a88-d5bb-47df-91c6-83bfeea96292" (UID: "13881a88-d5bb-47df-91c6-83bfeea96292"). InnerVolumeSpecName "kube-api-access-f47mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:17 crc kubenswrapper[4792]: I0216 23:04:17.754519 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f47mk\" (UniqueName: \"kubernetes.io/projected/13881a88-d5bb-47df-91c6-83bfeea96292-kube-api-access-f47mk\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.038431 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13881a88-d5bb-47df-91c6-83bfeea96292" path="/var/lib/kubelet/pods/13881a88-d5bb-47df-91c6-83bfeea96292/volumes" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.519072 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-l4wpt" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.907121 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-m48c5"] Feb 16 23:04:18 crc kubenswrapper[4792]: E0216 23:04:18.907801 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13881a88-d5bb-47df-91c6-83bfeea96292" containerName="container-00" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.907821 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="13881a88-d5bb-47df-91c6-83bfeea96292" containerName="container-00" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.908038 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="13881a88-d5bb-47df-91c6-83bfeea96292" containerName="container-00" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.908792 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.996153 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5lz\" (UniqueName: \"kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:18 crc kubenswrapper[4792]: I0216 23:04:18.996298 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:19 crc kubenswrapper[4792]: I0216 23:04:19.099321 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5lz\" (UniqueName: \"kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:19 crc kubenswrapper[4792]: I0216 23:04:19.099850 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:19 crc kubenswrapper[4792]: I0216 23:04:19.100032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:19 crc kubenswrapper[4792]: I0216 23:04:19.629784 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5lz\" (UniqueName: \"kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz\") pod \"crc-debug-m48c5\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.021569 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.551180 4792 generic.go:334] "Generic (PLEG): container finished" podID="05b5e454-4154-45db-98e9-857a79576acc" containerID="3ab96ac2e9bc759c2985ad7e78bb596860edb06aa598e7c45a4b13060a02ba6d" exitCode=1 Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.551265 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" event={"ID":"05b5e454-4154-45db-98e9-857a79576acc","Type":"ContainerDied","Data":"3ab96ac2e9bc759c2985ad7e78bb596860edb06aa598e7c45a4b13060a02ba6d"} Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.551571 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" event={"ID":"05b5e454-4154-45db-98e9-857a79576acc","Type":"ContainerStarted","Data":"93993d481db3ade8ea938f5ed21305ea3de43c039bd04fc6ae1f174944e03f1a"} Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.607399 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-m48c5"] Feb 16 23:04:20 crc kubenswrapper[4792]: I0216 23:04:20.618292 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcr5s/crc-debug-m48c5"] Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.028114 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:04:21 crc kubenswrapper[4792]: E0216 23:04:21.028402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:04:21 crc kubenswrapper[4792]: E0216 23:04:21.029264 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.695807 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.776938 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5lz\" (UniqueName: \"kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz\") pod \"05b5e454-4154-45db-98e9-857a79576acc\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.777102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host\") pod \"05b5e454-4154-45db-98e9-857a79576acc\" (UID: \"05b5e454-4154-45db-98e9-857a79576acc\") " Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.778099 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host" (OuterVolumeSpecName: "host") pod "05b5e454-4154-45db-98e9-857a79576acc" (UID: "05b5e454-4154-45db-98e9-857a79576acc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.782681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz" (OuterVolumeSpecName: "kube-api-access-hb5lz") pod "05b5e454-4154-45db-98e9-857a79576acc" (UID: "05b5e454-4154-45db-98e9-857a79576acc"). InnerVolumeSpecName "kube-api-access-hb5lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.880272 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5lz\" (UniqueName: \"kubernetes.io/projected/05b5e454-4154-45db-98e9-857a79576acc-kube-api-access-hb5lz\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:21 crc kubenswrapper[4792]: I0216 23:04:21.880305 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05b5e454-4154-45db-98e9-857a79576acc-host\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:22 crc kubenswrapper[4792]: I0216 23:04:22.042330 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b5e454-4154-45db-98e9-857a79576acc" path="/var/lib/kubelet/pods/05b5e454-4154-45db-98e9-857a79576acc/volumes" Feb 16 23:04:22 crc kubenswrapper[4792]: E0216 23:04:22.156937 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05b5e454_4154_45db_98e9_857a79576acc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05b5e454_4154_45db_98e9_857a79576acc.slice/crio-93993d481db3ade8ea938f5ed21305ea3de43c039bd04fc6ae1f174944e03f1a\": RecentStats: unable to find data in memory cache]" Feb 16 23:04:22 crc kubenswrapper[4792]: I0216 23:04:22.573491 4792 scope.go:117] "RemoveContainer" containerID="3ab96ac2e9bc759c2985ad7e78bb596860edb06aa598e7c45a4b13060a02ba6d" Feb 16 23:04:22 crc kubenswrapper[4792]: I0216 23:04:22.573496 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qcr5s/crc-debug-m48c5" Feb 16 23:04:31 crc kubenswrapper[4792]: E0216 23:04:31.028113 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:04:32 crc kubenswrapper[4792]: E0216 23:04:32.028911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:04:33 crc kubenswrapper[4792]: I0216 23:04:33.026275 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:04:33 crc kubenswrapper[4792]: E0216 23:04:33.026868 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:04:45 crc kubenswrapper[4792]: I0216 23:04:45.026627 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:04:45 crc kubenswrapper[4792]: E0216 23:04:45.027254 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:04:46 crc kubenswrapper[4792]: E0216 23:04:46.030099 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:04:46 crc kubenswrapper[4792]: E0216 23:04:46.030233 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:05:00 crc kubenswrapper[4792]: I0216 23:05:00.026952 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.027899 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.028894 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:05:00 crc kubenswrapper[4792]: I0216 23:05:00.029318 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.160898 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.161276 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.161442 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 23:05:00 crc kubenswrapper[4792]: E0216 23:05:00.162701 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:05:12 crc kubenswrapper[4792]: E0216 23:05:12.029437 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:05:13 crc kubenswrapper[4792]: E0216 23:05:13.110572 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:05:13 crc kubenswrapper[4792]: E0216 23:05:13.110860 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:05:13 crc kubenswrapper[4792]: E0216 23:05:13.110974 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 23:05:13 crc kubenswrapper[4792]: E0216 23:05:13.112194 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:05:15 crc kubenswrapper[4792]: I0216 23:05:15.026882 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:05:15 crc kubenswrapper[4792]: E0216 23:05:15.028333 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:05:25 crc kubenswrapper[4792]: E0216 23:05:25.032011 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:05:26 crc kubenswrapper[4792]: I0216 23:05:26.946808 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7d172284-1441-400a-bbf6-ba8574621533/aodh-api/0.log" Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.026892 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:05:27 crc kubenswrapper[4792]: E0216 23:05:27.027453 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:05:27 crc kubenswrapper[4792]: E0216 23:05:27.028877 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.155919 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7d172284-1441-400a-bbf6-ba8574621533/aodh-evaluator/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.168974 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7d172284-1441-400a-bbf6-ba8574621533/aodh-listener/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.183930 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7d172284-1441-400a-bbf6-ba8574621533/aodh-notifier/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.317872 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-698d56d666-pskd9_aba20562-d0b4-4de1-acaa-d0968fddb399/barbican-api/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.387160 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-698d56d666-pskd9_aba20562-d0b4-4de1-acaa-d0968fddb399/barbican-api-log/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.504109 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d878f6fc4-w97vq_0ff184ef-0e19-471a-b3b1-38e321e576cd/barbican-keystone-listener/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.582221 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d878f6fc4-w97vq_0ff184ef-0e19-471a-b3b1-38e321e576cd/barbican-keystone-listener-log/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.806958 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676b487647-vn2d7_a098cc94-e931-444d-a61b-6d2c8e32f435/barbican-worker/0.log"
Feb 16 23:05:27 crc kubenswrapper[4792]: I0216 23:05:27.967852 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676b487647-vn2d7_a098cc94-e931-444d-a61b-6d2c8e32f435/barbican-worker-log/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.048354 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dvldc_425f7d1f-0118-4ce5-95f5-a6f2a336dfa8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.264699 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e58723ee-d9c2-4b71-b072-3cf7b2a26c12/ceilometer-notification-agent/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.311829 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e58723ee-d9c2-4b71-b072-3cf7b2a26c12/proxy-httpd/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.393307 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e58723ee-d9c2-4b71-b072-3cf7b2a26c12/sg-core/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.501294 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d0993d32-4203-4fa0-a527-917981f0348d/cinder-api/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.545801 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d0993d32-4203-4fa0-a527-917981f0348d/cinder-api-log/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.723904 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b1584d19-127a-4d77-8e66-3096a62ae789/cinder-scheduler/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.756727 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b1584d19-127a-4d77-8e66-3096a62ae789/probe/0.log"
Feb 16 23:05:28 crc kubenswrapper[4792]: I0216 23:05:28.857016 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-5jl4c_1e5abd0c-4ca2-460c-a47f-a057371692d2/init/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.036324 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-5jl4c_1e5abd0c-4ca2-460c-a47f-a057371692d2/init/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.048773 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-5jl4c_1e5abd0c-4ca2-460c-a47f-a057371692d2/dnsmasq-dns/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.075361 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7fgqp_65f41687-f567-41a0-8ec2-3ac03e464ebe/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.265863 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bk8qg_01a8f572-f295-493c-aad8-417b6ca06b03/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.346701 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-cqlsd_3b2e7368-cabe-42cf-8b3f-8e6b743e8bba/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.485333 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dzz2p_e500e093-7b90-49a9-ae41-03f88648baa6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.566268 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hlz9g_79c18359-29ae-4f68-aee4-ada05c949dfd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.737827 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-k8djj_1fd88c0f-2daa-4b0f-b372-141a953ab8b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.782245 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n6zsm_e792897f-1081-40d9-8e65-3f3ac21cd119/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.944474 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa4253d-0a12-4f95-a89e-ab8cf0507ded/glance-httpd/0.log"
Feb 16 23:05:29 crc kubenswrapper[4792]: I0216 23:05:29.980542 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa4253d-0a12-4f95-a89e-ab8cf0507ded/glance-log/0.log"
Feb 16 23:05:30 crc kubenswrapper[4792]: I0216 23:05:30.177407 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38/glance-httpd/0.log"
Feb 16 23:05:30 crc kubenswrapper[4792]: I0216 23:05:30.269294 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_35e0fd4b-d939-49a5-8c5e-3a5ddd6a4d38/glance-log/0.log"
Feb 16 23:05:30 crc kubenswrapper[4792]: I0216 23:05:30.720104 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-789d9b5ffd-kgfxb_9159f697-7cfe-428b-8146-9fa0bab94592/heat-api/0.log"
Feb 16 23:05:30 crc kubenswrapper[4792]: I0216 23:05:30.933088 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-dcdcd9bbc-f9nr2_1ded7fb3-2456-4230-ace6-8786c6b9fd4e/heat-engine/0.log"
Feb 16 23:05:30 crc kubenswrapper[4792]: I0216 23:05:30.968354 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-fdc6c774c-p5p85_2b3f7c55-8515-478d-bd01-a18403a7116b/heat-cfnapi/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.101305 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5978f67fb4-lxqn8_66dc0f43-b1f3-4acc-a189-5d4df2f08aeb/keystone-api/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.149687 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29521321-bn56l_f21375f1-ace7-4a32-aaa7-eb7752bc5ffd/keystone-cron/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.266126 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29521381-km4sq_472704bc-8a94-4472-aaf0-b7527cfeb102/keystone-cron/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.328158 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dd434b09-606a-45c0-8b54-2fbf907587f7/kube-state-metrics/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.550221 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_b3131b03-f776-460c-9bd4-61398b8ba27a/mysqld-exporter/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.700700 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58f4767d9c-gk2k8_1a645a10-4e7b-42ed-a764-9cafab1d6086/neutron-api/0.log"
Feb 16 23:05:31 crc kubenswrapper[4792]: I0216 23:05:31.761988 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58f4767d9c-gk2k8_1a645a10-4e7b-42ed-a764-9cafab1d6086/neutron-httpd/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.083039 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6/nova-api-log/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.207045 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2c87f02a-122e-4d95-8c0f-f4e8a17450a3/nova-cell0-conductor-conductor/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.514961 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1c02c9b2-0bc4-4417-8f78-e31791c9d8d6/nova-cell1-conductor-conductor/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.618423 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f1d1ff0-9d6e-43e7-9cef-a5b8c6bb79c6/nova-api-api/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.621515 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_24be6f91-f4d4-44ae-9cf4-17690f27e4be/nova-cell1-novncproxy-novncproxy/0.log"
Feb 16 23:05:32 crc kubenswrapper[4792]: I0216 23:05:32.793590 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6bc8f806-8d65-4035-9830-e7bf69083c19/nova-metadata-log/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.045478 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_464ac62e-e668-417e-85ed-f8ddcee7ba19/nova-scheduler-scheduler/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.144938 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07ce522d-6acb-4c52-aa4a-5997916ce345/mysql-bootstrap/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.351003 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07ce522d-6acb-4c52-aa4a-5997916ce345/galera/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.402663 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_07ce522d-6acb-4c52-aa4a-5997916ce345/mysql-bootstrap/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.552959 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce68e433-fd1b-4a65-84e2-33ecf84fc4ea/mysql-bootstrap/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.749392 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce68e433-fd1b-4a65-84e2-33ecf84fc4ea/mysql-bootstrap/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.807928 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce68e433-fd1b-4a65-84e2-33ecf84fc4ea/galera/0.log"
Feb 16 23:05:33 crc kubenswrapper[4792]: I0216 23:05:33.927146 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7a688f5f-10e0-42eb-863d-c8f919b2e3f5/openstackclient/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.010219 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5q4gs_fc8ee070-8557-4708-a58f-7e5899ed206b/ovn-controller/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.206949 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rzhpq_76a50771-3519-451e-af83-32d1da662062/openstack-network-exporter/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.414402 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cfzsw_60d2ecc7-d6a4-4c05-be72-ee4df484e081/ovsdb-server-init/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.640979 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cfzsw_60d2ecc7-d6a4-4c05-be72-ee4df484e081/ovsdb-server-init/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.678110 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cfzsw_60d2ecc7-d6a4-4c05-be72-ee4df484e081/ovsdb-server/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.692708 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cfzsw_60d2ecc7-d6a4-4c05-be72-ee4df484e081/ovs-vswitchd/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.813956 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6bc8f806-8d65-4035-9830-e7bf69083c19/nova-metadata-metadata/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.876748 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6af85927-1a78-41d9-8d3d-cfef6f7f9d20/openstack-network-exporter/0.log"
Feb 16 23:05:34 crc kubenswrapper[4792]: I0216 23:05:34.918458 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6af85927-1a78-41d9-8d3d-cfef6f7f9d20/ovn-northd/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.074713 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9b5affff-971a-4114-9a3a-2bbdace2e7b9/openstack-network-exporter/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.141921 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9b5affff-971a-4114-9a3a-2bbdace2e7b9/ovsdbserver-nb/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.252340 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5891cbfc-31ff-494c-b21c-5de41da698c7/openstack-network-exporter/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.295526 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5891cbfc-31ff-494c-b21c-5de41da698c7/ovsdbserver-sb/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.506659 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9686f857b-mxcsr_616f13af-2b9a-40da-a031-aa421f1ff745/placement-api/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.555012 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9686f857b-mxcsr_616f13af-2b9a-40da-a031-aa421f1ff745/placement-log/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.582426 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ee2931a-9b3b-4568-b83b-9846e6f9c65a/init-config-reloader/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.828668 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ee2931a-9b3b-4568-b83b-9846e6f9c65a/init-config-reloader/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.835769 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ee2931a-9b3b-4568-b83b-9846e6f9c65a/prometheus/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.850205 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ee2931a-9b3b-4568-b83b-9846e6f9c65a/config-reloader/0.log"
Feb 16 23:05:35 crc kubenswrapper[4792]: I0216 23:05:35.871701 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ee2931a-9b3b-4568-b83b-9846e6f9c65a/thanos-sidecar/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.056977 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40456664-5897-4d32-b9de-d0d48a06764d/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.294958 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40456664-5897-4d32-b9de-d0d48a06764d/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.308070 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_40456664-5897-4d32-b9de-d0d48a06764d/rabbitmq/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.338430 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd000b08-b38a-4541-959f-e1c3151131d6/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.500019 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd000b08-b38a-4541-959f-e1c3151131d6/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.537869 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd000b08-b38a-4541-959f-e1c3151131d6/rabbitmq/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.589630 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_37d607c0-fb36-4635-9e83-4e07cd4906ff/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.905331 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_37d607c0-fb36-4635-9e83-4e07cd4906ff/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.931919 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ba92392-a8a9-40c9-9b0a-d35179a63c16/setup-container/0.log"
Feb 16 23:05:36 crc kubenswrapper[4792]: I0216 23:05:36.943565 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_37d607c0-fb36-4635-9e83-4e07cd4906ff/rabbitmq/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.130524 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ba92392-a8a9-40c9-9b0a-d35179a63c16/setup-container/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.188591 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-478c7_c1a6b3ea-b10b-44b1-a26a-f9df8972529c/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.218088 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ba92392-a8a9-40c9-9b0a-d35179a63c16/rabbitmq/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.470196 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-t4h8s_c1fd7643-19c7-4d63-a36e-06ea1ff7d3eb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.698207 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d7f78dd75-dlmv8_633c7466-7045-47d2-906d-0d9881501baa/proxy-httpd/0.log"
Feb 16 23:05:37 crc kubenswrapper[4792]: I0216 23:05:37.866446 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d7f78dd75-dlmv8_633c7466-7045-47d2-906d-0d9881501baa/proxy-server/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.014440 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qlqfk_bebd5c80-d002-49e6-ac52-d1d323b83801/swift-ring-rebalance/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: E0216 23:05:38.045876 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.104210 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/account-auditor/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.296977 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/account-reaper/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.325291 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/container-auditor/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.328286 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/account-replicator/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.371176 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/account-server/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.537344 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/container-updater/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.537934 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/container-replicator/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.581767 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/object-auditor/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.587356 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/container-server/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.730742 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/object-expirer/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.737170 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/object-replicator/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.895322 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/object-server/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.897487 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/object-updater/0.log"
Feb 16 23:05:38 crc kubenswrapper[4792]: I0216 23:05:38.989907 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/rsync/0.log"
Feb 16 23:05:39 crc kubenswrapper[4792]: I0216 23:05:39.016542 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e2ada762-95ad-4810-b5da-b4ca59652a45/swift-recon-cron/0.log"
Feb 16 23:05:41 crc kubenswrapper[4792]: I0216 23:05:41.025933 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"
Feb 16 23:05:41 crc kubenswrapper[4792]: E0216 23:05:41.026711 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:05:42 crc kubenswrapper[4792]: E0216 23:05:42.027965 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:05:43 crc kubenswrapper[4792]: I0216 23:05:43.312434 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_356c7c8e-30ec-45a3-a276-b8cca48b4774/memcached/0.log"
Feb 16 23:05:52 crc kubenswrapper[4792]: E0216 23:05:52.029670 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:05:54 crc kubenswrapper[4792]: I0216 23:05:54.026619 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"
Feb 16 23:05:54 crc kubenswrapper[4792]: E0216 23:05:54.027279 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:05:54 crc kubenswrapper[4792]: E0216 23:05:54.028852 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.776531 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:02 crc kubenswrapper[4792]: E0216 23:06:02.777495 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b5e454-4154-45db-98e9-857a79576acc" containerName="container-00"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.777508 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b5e454-4154-45db-98e9-857a79576acc" containerName="container-00"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.777773 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b5e454-4154-45db-98e9-857a79576acc" containerName="container-00"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.779485 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.788015 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.881934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfxj\" (UniqueName: \"kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.882117 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.882329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.985213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfxj\" (UniqueName: \"kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.985285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.985353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.987120 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:02 crc kubenswrapper[4792]: I0216 23:06:02.989755 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:03 crc kubenswrapper[4792]: I0216 23:06:03.009154 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfxj\" (UniqueName: \"kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj\") pod \"certified-operators-qcdvq\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") " pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:03 crc kubenswrapper[4792]: I0216 23:06:03.114668 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:03 crc kubenswrapper[4792]: I0216 23:06:03.741700 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:04 crc kubenswrapper[4792]: I0216 23:06:04.671247 4792 generic.go:334] "Generic (PLEG): container finished" podID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerID="7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb" exitCode=0
Feb 16 23:06:04 crc kubenswrapper[4792]: I0216 23:06:04.671348 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerDied","Data":"7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb"}
Feb 16 23:06:04 crc kubenswrapper[4792]: I0216 23:06:04.671532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerStarted","Data":"885cd28d18372be143a57e8126f9649bbec4922546b70c5564da89c0d4731613"}
Feb 16 23:06:06 crc kubenswrapper[4792]: E0216 23:06:06.028188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:06:06 crc kubenswrapper[4792]: I0216 23:06:06.694521 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerStarted","Data":"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"}
Feb 16 23:06:07 crc kubenswrapper[4792]: I0216 23:06:07.705507 4792 generic.go:334] "Generic (PLEG): container finished" podID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerID="d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3" exitCode=0
Feb 16 23:06:07 crc kubenswrapper[4792]: I0216 23:06:07.705621 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerDied","Data":"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"}
Feb 16 23:06:08 crc kubenswrapper[4792]: I0216 23:06:08.035764 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0"
Feb 16 23:06:08 crc kubenswrapper[4792]: I0216 23:06:08.717929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerStarted","Data":"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"}
Feb 16 23:06:08 crc kubenswrapper[4792]: I0216 23:06:08.720359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a"}
Feb 16 23:06:08 crc kubenswrapper[4792]: I0216 23:06:08.744169 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcdvq" podStartSLOduration=3.319422114 podStartE2EDuration="6.744147957s" podCreationTimestamp="2026-02-16 23:06:02 +0000 UTC" firstStartedPulling="2026-02-16 23:06:04.67328755 +0000 UTC m=+5297.326566441" lastFinishedPulling="2026-02-16 23:06:08.098013393 +0000 UTC m=+5300.751292284" observedRunningTime="2026-02-16 23:06:08.741473134 +0000 UTC m=+5301.394752035" watchObservedRunningTime="2026-02-16 23:06:08.744147957 +0000 UTC m=+5301.397426868"
Feb 16 23:06:09 crc kubenswrapper[4792]: E0216 23:06:09.028243 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:06:11 crc kubenswrapper[4792]: I0216 23:06:11.592487 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/util/0.log"
Feb 16 23:06:11 crc kubenswrapper[4792]: I0216 23:06:11.808712 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/pull/0.log"
Feb 16 23:06:11 crc kubenswrapper[4792]: I0216 23:06:11.824531 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/util/0.log"
Feb 16 23:06:11 crc kubenswrapper[4792]: I0216 23:06:11.853481 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/pull/0.log"
Feb 16 23:06:12 crc kubenswrapper[4792]: I0216 23:06:12.003527 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/util/0.log"
Feb 16 23:06:12 crc kubenswrapper[4792]: I0216 23:06:12.023147 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/pull/0.log"
Feb 16 23:06:12 crc kubenswrapper[4792]: I0216 23:06:12.097014 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a10786088qrl7_68b216db-f03f-4138-a015-d41cb53a6492/extract/0.log"
Feb 16 23:06:12 crc kubenswrapper[4792]: I0216 23:06:12.520251 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bdq8l_f40e7a2f-83ba-4c6d-87e6-35ef8ce1638f/manager/0.log"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.115527 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.115570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.170408 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.380377 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-68zdd_e79f0a7a-0416-4cbe-b6ec-c52db85aae80/manager/0.log"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.810808 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.839816 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kwchw_14a0a678-34ee-46ea-97b2-dda55282c312/manager/0.log"
Feb 16 23:06:13 crc kubenswrapper[4792]: I0216 23:06:13.882362 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:14 crc kubenswrapper[4792]: I0216 23:06:14.070564 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-c7g29_2c61991d-c4f0-4ac4-81af-951bbb318042/manager/0.log"
Feb 16 23:06:14 crc kubenswrapper[4792]: I0216 23:06:14.623893 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5jfgv_3552825c-be0d-4a97-9caf-f8a1ceb96564/manager/0.log"
Feb 16 23:06:14 crc kubenswrapper[4792]: I0216 23:06:14.899814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-d52s2_0ca1643f-fcdd-4500-b446-06862c80c736/manager/0.log"
Feb 16 23:06:15 crc kubenswrapper[4792]: I0216 23:06:15.207681 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-n9g6q_8b18ef30-f020-4cf7-8068-69f90696ac66/manager/0.log"
Feb 16 23:06:15 crc kubenswrapper[4792]: I0216 23:06:15.221864 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-q68hm_0031ef47-8c9b-43e3-8484-f1400d13b1c0/manager/0.log"
Feb 16 23:06:15 crc kubenswrapper[4792]: I0216 23:06:15.732161 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-xl8k2_bd4eda7b-78cc-4c87-9210-6c9581ad3fab/manager/0.log"
Feb 16 23:06:15 crc kubenswrapper[4792]: I0216 23:06:15.742801 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-gsjf4_16470449-37c4-419d-8932-f0c7ee201aaa/manager/0.log"
Feb 16 23:06:15 crc kubenswrapper[4792]: I0216 23:06:15.781717 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qcdvq" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="registry-server" containerID="cri-o://d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064" gracePeriod=2
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.050830 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-bzg6v_bd719b4e-7fbb-48d2-ab0f-3a0257fe4070/manager/0.log"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.196939 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-8fcb2_47b9a9f7-c72f-45ae-96ea-1e8b19065304/manager/0.log"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.467700 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.486374 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cfp8tw_6d7fec09-c983-4893-b691-10fec0ee2206/manager/0.log"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.607003 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content\") pod \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") "
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.607334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjfxj\" (UniqueName: \"kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj\") pod \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") "
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.607395 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities\") pod \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\" (UID: \"f01e13d1-5288-41c2-b8c5-216ead7bb36a\") "
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.613890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities" (OuterVolumeSpecName: "utilities") pod "f01e13d1-5288-41c2-b8c5-216ead7bb36a" (UID: "f01e13d1-5288-41c2-b8c5-216ead7bb36a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.631393 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj" (OuterVolumeSpecName: "kube-api-access-tjfxj") pod "f01e13d1-5288-41c2-b8c5-216ead7bb36a" (UID: "f01e13d1-5288-41c2-b8c5-216ead7bb36a"). InnerVolumeSpecName "kube-api-access-tjfxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.703867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f01e13d1-5288-41c2-b8c5-216ead7bb36a" (UID: "f01e13d1-5288-41c2-b8c5-216ead7bb36a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.710481 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.710508 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjfxj\" (UniqueName: \"kubernetes.io/projected/f01e13d1-5288-41c2-b8c5-216ead7bb36a-kube-api-access-tjfxj\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.710519 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01e13d1-5288-41c2-b8c5-216ead7bb36a-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.792160 4792 generic.go:334] "Generic (PLEG): container finished" podID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerID="d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064" exitCode=0
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.792204 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerDied","Data":"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"}
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.792231 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcdvq" event={"ID":"f01e13d1-5288-41c2-b8c5-216ead7bb36a","Type":"ContainerDied","Data":"885cd28d18372be143a57e8126f9649bbec4922546b70c5564da89c0d4731613"}
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.792249 4792 scope.go:117] "RemoveContainer" containerID="d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.792257 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcdvq"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.826889 4792 scope.go:117] "RemoveContainer" containerID="d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.837404 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.850635 4792 scope.go:117] "RemoveContainer" containerID="7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.857205 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qcdvq"]
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.898288 4792 scope.go:117] "RemoveContainer" containerID="d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"
Feb 16 23:06:16 crc kubenswrapper[4792]: E0216 23:06:16.899929 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064\": container with ID starting with d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064 not found: ID does not exist" containerID="d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.899963 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064"} err="failed to get container status \"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064\": rpc error: code = NotFound desc = could not find container \"d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064\": container with ID starting with d6d41de93857c8f97d5a329298bc21092a4b7efc854fe16b091ac7f19f6df064 not found: ID does not exist"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.899984 4792 scope.go:117] "RemoveContainer" containerID="d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"
Feb 16 23:06:16 crc kubenswrapper[4792]: E0216 23:06:16.901186 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3\": container with ID starting with d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3 not found: ID does not exist" containerID="d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.901299 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3"} err="failed to get container status \"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3\": rpc error: code = NotFound desc = could not find container \"d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3\": container with ID starting with d024d9f81609246127c93317e5566bf50c067b29824301b7475b86f6ea0e16c3 not found: ID does not exist"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.901490 4792 scope.go:117] "RemoveContainer" containerID="7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb"
Feb 16 23:06:16 crc kubenswrapper[4792]: E0216 23:06:16.901794 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb\": container with ID starting with 7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb not found: ID does not exist" containerID="7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.901891 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb"} err="failed to get container status \"7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb\": rpc error: code = NotFound desc = could not find container \"7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb\": container with ID starting with 7243ad9d4b2c2f4c938cca41e53061c09185d273e33337cd58edec0bc2395dbb not found: ID does not exist"
Feb 16 23:06:16 crc kubenswrapper[4792]: I0216 23:06:16.911551 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7845fcf9cf-frtrn_9fe0c3a6-98f0-4c15-926e-b9b4e05711db/operator/0.log"
Feb 16 23:06:17 crc kubenswrapper[4792]: I0216 23:06:17.144293 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bmzkd_bf231cc9-0b32-43b0-ad49-55d1b28d977d/registry-server/0.log"
Feb 16 23:06:17 crc kubenswrapper[4792]: I0216 23:06:17.499454 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-8qm72_7bd0c0a5-5844-4906-bafc-1806ca7901a7/manager/0.log"
Feb 16 23:06:17 crc kubenswrapper[4792]: I0216 23:06:17.685691 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-ld8dz_545d4d3f-7ef6-413d-a879-59591fbb7f16/manager/0.log"
Feb 16 23:06:17 crc kubenswrapper[4792]: I0216 23:06:17.991949 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6qzwl_63b5bb19-3cd9-4c45-a3a7-8c01e0a2a3ee/operator/0.log"
Feb 16 23:06:18 crc kubenswrapper[4792]: I0216 23:06:18.051284 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" path="/var/lib/kubelet/pods/f01e13d1-5288-41c2-b8c5-216ead7bb36a/volumes"
Feb 16 23:06:18 crc kubenswrapper[4792]: I0216 23:06:18.296584 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-bxt7g_1afa399d-c3b2-4ad7-a61d-b139e3a975ae/manager/0.log"
Feb 16 23:06:18 crc kubenswrapper[4792]: I0216 23:06:18.790901 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-6nlgl_be6b1607-d6a3-4970-80c3-e1368db4877e/manager/0.log"
Feb 16 23:06:18 crc kubenswrapper[4792]: I0216 23:06:18.951420 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9c8f544df-6dgqv_4b00b428-3d0e-4120-a21c-7722e529fde5/manager/0.log"
Feb 16 23:06:19 crc kubenswrapper[4792]: I0216 23:06:19.096958 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-79996fd568-rkdpn_fe04b110-3ba2-468b-ae82-ae43720f03ad/manager/0.log"
Feb 16 23:06:19 crc kubenswrapper[4792]: I0216 23:06:19.183922 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-qc68s_7b4f7a7e-b90d-4210-8254-ae10083bf021/manager/0.log"
Feb 16 23:06:19 crc kubenswrapper[4792]: I0216 23:06:19.470254 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-xklb9_8d8bb033-cde2-41c5-9ac9-ea761df10203/manager/0.log"
Feb 16 23:06:21 crc kubenswrapper[4792]: E0216 23:06:21.031087 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:06:24 crc kubenswrapper[4792]: E0216 23:06:24.035024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:06:25 crc kubenswrapper[4792]: I0216 23:06:25.771924 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-ckk8x_3198bf1a-e4e7-4f1b-bc18-79581f4cc1c5/manager/0.log"
Feb 16 23:06:36 crc kubenswrapper[4792]: E0216 23:06:36.029469 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:06:39 crc kubenswrapper[4792]: E0216 23:06:39.031065 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:06:45 crc kubenswrapper[4792]: I0216 23:06:45.718175 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6btrx_6e2d2b51-afe4-44d1-9c18-0bcef522d6dd/control-plane-machine-set-operator/0.log"
Feb 16 23:06:45 crc kubenswrapper[4792]: I0216 23:06:45.876864 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ncn6b_14e13832-467f-4f02-9ded-be8ca6bc6ed2/kube-rbac-proxy/0.log"
Feb 16 23:06:45 crc kubenswrapper[4792]: I0216 23:06:45.948004 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ncn6b_14e13832-467f-4f02-9ded-be8ca6bc6ed2/machine-api-operator/0.log"
Feb 16 23:06:50 crc kubenswrapper[4792]: E0216 23:06:50.029512 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:06:50 crc kubenswrapper[4792]: E0216 23:06:50.029556 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:07:01 crc kubenswrapper[4792]: E0216 23:07:01.029466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:07:01 crc kubenswrapper[4792]: I0216 23:07:01.289873 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qdhtx_7507a7a6-6084-469d-a099-a8261994754f/cert-manager-controller/0.log"
Feb 16 23:07:01 crc kubenswrapper[4792]: I0216 23:07:01.452547 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-n7j6z_99532456-78ab-4fbd-8aec-6211c50318c2/cert-manager-cainjector/0.log"
Feb 16 23:07:01 crc kubenswrapper[4792]: I0216 23:07:01.554219 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-z4dw4_4f130ece-d511-4abe-8198-8629164ab661/cert-manager-webhook/0.log"
Feb 16 23:07:03 crc kubenswrapper[4792]: E0216 23:07:03.027677 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:07:12 crc kubenswrapper[4792]: E0216 23:07:12.027012 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:07:15 crc kubenswrapper[4792]: E0216 23:07:15.031518 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.193859 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-sdtfz_f641b77f-8af3-4104-80c3-e07504d086d1/nmstate-console-plugin/0.log"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.300746 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-llwc8_dd045bc0-e27a-4fc1-808c-dd7aec8fce07/nmstate-handler/0.log"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.414706 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gdhtc_06b05942-626d-480f-bae3-80eafaef0fa5/kube-rbac-proxy/0.log"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.494405 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gdhtc_06b05942-626d-480f-bae3-80eafaef0fa5/nmstate-metrics/0.log"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.615915 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-65zbh_8eb6adaa-1be6-408f-b428-ccdb580dfb6a/nmstate-operator/0.log"
Feb 16 23:07:17 crc kubenswrapper[4792]: I0216 23:07:17.693502 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-kk8rg_a0c35ce8-00e1-4421-9a89-a335e12d0d71/nmstate-webhook/0.log"
Feb 16 23:07:23 crc kubenswrapper[4792]: E0216 23:07:23.029261 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:07:28 crc kubenswrapper[4792]: E0216 23:07:28.038292 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:07:31 crc kubenswrapper[4792]: I0216 23:07:31.908173 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c9d97fb5-j4f5p_e2d0a7d0-53d6-4031-894c-734f67974527/kube-rbac-proxy/0.log"
Feb 16 23:07:31 crc kubenswrapper[4792]: I0216 23:07:31.909809 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c9d97fb5-j4f5p_e2d0a7d0-53d6-4031-894c-734f67974527/manager/0.log"
Feb 16 23:07:37 crc kubenswrapper[4792]: E0216 23:07:37.028909 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:07:42 crc kubenswrapper[4792]: E0216 23:07:42.030316 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.909748 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"]
Feb 16 23:07:42 crc kubenswrapper[4792]: E0216 23:07:42.910223 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="extract-utilities"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.910241 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="extract-utilities"
Feb 16 23:07:42 crc kubenswrapper[4792]: E0216 23:07:42.910255 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="extract-content"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.910262 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="extract-content"
Feb 16 23:07:42 crc kubenswrapper[4792]: E0216 23:07:42.910287 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="registry-server"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.910296 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="registry-server"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.910533 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01e13d1-5288-41c2-b8c5-216ead7bb36a" containerName="registry-server"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.913702 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.940739 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"]
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.953410 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvtc\" (UniqueName: \"kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.953670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:42 crc kubenswrapper[4792]: I0216 23:07:42.953741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.056637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.056712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.056896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvtc\" (UniqueName: \"kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.057195 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.057304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.082237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvtc\" (UniqueName: \"kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc\") pod \"redhat-marketplace-rdwkf\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.232412 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdwkf"
Feb 16 23:07:43 crc kubenswrapper[4792]: I0216 23:07:43.781394 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"]
Feb 16 23:07:44 crc kubenswrapper[4792]: I0216 23:07:44.744881 4792 generic.go:334] "Generic (PLEG): container finished" podID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerID="b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85" exitCode=0
Feb 16 23:07:44 crc kubenswrapper[4792]: I0216 23:07:44.745004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerDied","Data":"b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85"}
Feb 16 23:07:44 crc kubenswrapper[4792]: I0216 23:07:44.745202 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerStarted","Data":"ef58dfb7316bd06be818b09217aa91b33174f685912e77bf935b554de1226a5f"}
Feb 16 23:07:45 crc kubenswrapper[4792]: I0216 23:07:45.755056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerStarted","Data":"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd"}
Feb 16 23:07:46 crc kubenswrapper[4792]: I0216 23:07:46.110905 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-785cg_cc1404e2-49f6-48df-99fc-24b7b05b5e33/prometheus-operator/0.log"
Feb 16 23:07:46 crc kubenswrapper[4792]: I0216 23:07:46.549274 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_2899a7e8-f5fa-4879-9df7-ba57ae9f4262/prometheus-operator-admission-webhook/0.log"
Feb 16 23:07:46 crc kubenswrapper[4792]: I0216 23:07:46.554743 4792 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_e173d96c-280b-4293-ae21-272cce1b11bc/prometheus-operator-admission-webhook/0.log" Feb 16 23:07:46 crc kubenswrapper[4792]: I0216 23:07:46.790554 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8nqmm_6dea83c6-c1d5-4b8e-a70c-3184a366721a/observability-ui-dashboards/0.log" Feb 16 23:07:46 crc kubenswrapper[4792]: I0216 23:07:46.798811 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7sqrb_85d29954-608f-4bb5-805e-5ac6d45b6652/operator/0.log" Feb 16 23:07:47 crc kubenswrapper[4792]: I0216 23:07:47.001814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7jr7l_f912b10c-80d1-4667-b807-45a54e626fbe/perses-operator/0.log" Feb 16 23:07:47 crc kubenswrapper[4792]: I0216 23:07:47.778200 4792 generic.go:334] "Generic (PLEG): container finished" podID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerID="5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd" exitCode=0 Feb 16 23:07:47 crc kubenswrapper[4792]: I0216 23:07:47.778282 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerDied","Data":"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd"} Feb 16 23:07:48 crc kubenswrapper[4792]: I0216 23:07:48.789279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerStarted","Data":"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64"} Feb 16 23:07:48 crc kubenswrapper[4792]: I0216 23:07:48.808008 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdwkf" podStartSLOduration=3.404050355 podStartE2EDuration="6.807991727s" podCreationTimestamp="2026-02-16 23:07:42 +0000 UTC" firstStartedPulling="2026-02-16 23:07:44.746425181 +0000 UTC m=+5397.399704072" lastFinishedPulling="2026-02-16 23:07:48.150366553 +0000 UTC m=+5400.803645444" observedRunningTime="2026-02-16 23:07:48.807251277 +0000 UTC m=+5401.460530188" watchObservedRunningTime="2026-02-16 23:07:48.807991727 +0000 UTC m=+5401.461270628" Feb 16 23:07:50 crc kubenswrapper[4792]: E0216 23:07:50.029412 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:07:53 crc kubenswrapper[4792]: E0216 23:07:53.029136 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:07:53 crc kubenswrapper[4792]: I0216 23:07:53.232622 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:53 crc kubenswrapper[4792]: I0216 23:07:53.232695 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:53 crc kubenswrapper[4792]: I0216 23:07:53.295239 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:53 crc kubenswrapper[4792]: I0216 23:07:53.894499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:53 crc kubenswrapper[4792]: I0216 23:07:53.979225 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"] Feb 16 23:07:55 crc kubenswrapper[4792]: I0216 23:07:55.861918 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdwkf" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="registry-server" containerID="cri-o://188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64" gracePeriod=2 Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.503637 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.603482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvtc\" (UniqueName: \"kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc\") pod \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.603566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities\") pod \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.603611 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content\") pod \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\" (UID: \"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75\") " Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.605000 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities" (OuterVolumeSpecName: "utilities") pod "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" (UID: "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.616429 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc" (OuterVolumeSpecName: "kube-api-access-svvtc") pod "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" (UID: "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75"). InnerVolumeSpecName "kube-api-access-svvtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.650920 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" (UID: "90d5a4ee-79c6-4b66-87ff-9e0d0571ec75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.705087 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvtc\" (UniqueName: \"kubernetes.io/projected/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-kube-api-access-svvtc\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.705117 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.705127 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.873996 4792 generic.go:334] "Generic (PLEG): container finished" podID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerID="188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64" exitCode=0 Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.874042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerDied","Data":"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64"} Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.874068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdwkf" event={"ID":"90d5a4ee-79c6-4b66-87ff-9e0d0571ec75","Type":"ContainerDied","Data":"ef58dfb7316bd06be818b09217aa91b33174f685912e77bf935b554de1226a5f"} Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.874089 4792 scope.go:117] "RemoveContainer" containerID="188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.874213 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdwkf" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.906813 4792 scope.go:117] "RemoveContainer" containerID="5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd" Feb 16 23:07:56 crc kubenswrapper[4792]: E0216 23:07:56.923566 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d5a4ee_79c6_4b66_87ff_9e0d0571ec75.slice\": RecentStats: unable to find data in memory cache]" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.930838 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"] Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.940411 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdwkf"] Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.950683 4792 scope.go:117] "RemoveContainer" containerID="b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.990714 4792 scope.go:117] "RemoveContainer" containerID="188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64" Feb 16 23:07:56 crc kubenswrapper[4792]: E0216 23:07:56.991232 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64\": container with ID starting with 188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64 not found: ID does not exist" containerID="188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.991286 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64"} err="failed to get container status \"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64\": rpc error: code = NotFound desc = could not find container \"188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64\": container with ID starting with 188a13e89da8aff778006ede624a3066716abca93adcadb6c271264191a8be64 not found: ID does not exist" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.991313 4792 scope.go:117] "RemoveContainer" containerID="5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd" Feb 16 23:07:56 crc kubenswrapper[4792]: E0216 23:07:56.991638 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd\": container with ID starting with 5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd not found: ID does not exist" containerID="5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.991681 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd"} err="failed to get container status \"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd\": rpc error: code = NotFound desc = could not find container \"5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd\": container with ID starting with 
5b37d2b8662c833dba80caae85b8690775701645f7a52ca1ec0a2a69a664f3bd not found: ID does not exist" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.991726 4792 scope.go:117] "RemoveContainer" containerID="b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85" Feb 16 23:07:56 crc kubenswrapper[4792]: E0216 23:07:56.991984 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85\": container with ID starting with b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85 not found: ID does not exist" containerID="b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85" Feb 16 23:07:56 crc kubenswrapper[4792]: I0216 23:07:56.992003 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85"} err="failed to get container status \"b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85\": rpc error: code = NotFound desc = could not find container \"b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85\": container with ID starting with b2d5831af78805c85fd66a6021634935b96930386ed49f8a8d6947be036dcc85 not found: ID does not exist" Feb 16 23:07:58 crc kubenswrapper[4792]: I0216 23:07:58.037966 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" path="/var/lib/kubelet/pods/90d5a4ee-79c6-4b66-87ff-9e0d0571ec75/volumes" Feb 16 23:08:04 crc kubenswrapper[4792]: E0216 23:08:04.028311 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.239265 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-7bglt_e8d6dc28-8ec7-4d64-9868-673d3ea42873/cluster-logging-operator/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.400304 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-9nkvn_e3e938a2-8839-497e-ba02-7d1f5e2a1998/collector/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.511620 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_732fff3b-fe1d-4e49-96da-e18db7ce5e9b/loki-compactor/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.585147 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-x5pvq_6c9676d6-4914-442f-b206-68319ef59156/loki-distributor/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.703394 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f68b45f-f5k5x_e4cfe4c6-e37d-4507-9bed-c2f13c0978ff/gateway/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.722295 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f68b45f-f5k5x_e4cfe4c6-e37d-4507-9bed-c2f13c0978ff/opa/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.859796 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-85f68b45f-p8dz5_89876142-9620-43ca-bc5e-d0615a643826/gateway/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.884420 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f68b45f-p8dz5_89876142-9620-43ca-bc5e-d0615a643826/opa/0.log" Feb 16 23:08:04 crc kubenswrapper[4792]: I0216 23:08:04.979787 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_e322f3d3-92f8-4b24-88ea-a2189fc9c7fb/loki-index-gateway/0.log" Feb 16 23:08:05 crc kubenswrapper[4792]: I0216 23:08:05.159323 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_4857850b-9fec-45a6-8c45-9d13153372cf/loki-ingester/0.log" Feb 16 23:08:05 crc kubenswrapper[4792]: I0216 23:08:05.301811 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-696l8_1b78e491-c2b1-4381-b1df-4e53af021942/loki-querier/0.log" Feb 16 23:08:05 crc kubenswrapper[4792]: I0216 23:08:05.328666 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-wks44_3e0a446f-fcc8-40b8-81bc-fc80c8764582/loki-query-frontend/0.log" Feb 16 23:08:08 crc kubenswrapper[4792]: E0216 23:08:08.036284 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:08:19 crc kubenswrapper[4792]: E0216 23:08:19.029570 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:08:19 crc kubenswrapper[4792]: E0216 23:08:19.029854 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.452635 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-pst5t_2c5546f7-52f2-453d-8979-ce4ccd26c165/kube-rbac-proxy/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.574745 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-pst5t_2c5546f7-52f2-453d-8979-ce4ccd26c165/controller/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.673814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-frr-files/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.850165 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-metrics/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.864298 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-frr-files/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.885005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-reloader/0.log" Feb 16 23:08:21 crc kubenswrapper[4792]: I0216 23:08:21.890403 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-reloader/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.062209 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-reloader/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.062331 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-frr-files/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.087362 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-metrics/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.100755 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-metrics/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.268248 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-metrics/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.279523 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-frr-files/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.286205 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/controller/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.287560 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/cp-reloader/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.440784 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/kube-rbac-proxy/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.450005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/frr-metrics/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.490536 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/kube-rbac-proxy-frr/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.655931 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/reloader/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.758816 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-zkb5q_f12e75d7-4541-4024-b589-eb6cd86c6d18/frr-k8s-webhook-server/0.log" Feb 16 23:08:22 crc kubenswrapper[4792]: I0216 23:08:22.918816 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b99cc5488-gwb5q_a4687638-a268-4abd-afdd-3c7d7b257113/manager/0.log" Feb 16 23:08:23 crc kubenswrapper[4792]: I0216 23:08:23.039315 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-678488bb86-zks4j_2a4c6ce4-d81d-460a-a14e-0701afe8957f/webhook-server/0.log" Feb 16 23:08:23 crc kubenswrapper[4792]: I0216 23:08:23.280669 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8bvkf_f8a21d7f-64c4-4182-9950-4ab70399f312/kube-rbac-proxy/0.log" Feb 16 23:08:23 crc kubenswrapper[4792]: I0216 23:08:23.815439 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8bvkf_f8a21d7f-64c4-4182-9950-4ab70399f312/speaker/0.log" Feb 16 23:08:24 crc kubenswrapper[4792]: I0216 23:08:24.317744 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s7hh8_de99e45c-01de-43eb-84bb-a601f9242155/frr/0.log" Feb 16 23:08:30 crc kubenswrapper[4792]: E0216 23:08:30.029276 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:08:31 crc kubenswrapper[4792]: E0216 23:08:31.028572 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:08:31 crc kubenswrapper[4792]: I0216 23:08:31.532352 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:08:31 crc kubenswrapper[4792]: I0216 23:08:31.532709 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:08:37 crc kubenswrapper[4792]: I0216 23:08:37.996075 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/util/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.223186 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/util/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.246809 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/pull/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.279460 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/pull/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.448288 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/util/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.452723 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/pull/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.459009 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e194gj5m_a4af572d-5db9-4583-b0be-58556116679c/extract/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.629755 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/util/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.827282 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/util/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.855937 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/pull/0.log" Feb 16 23:08:38 crc kubenswrapper[4792]: I0216 23:08:38.879900 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/pull/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.036146 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/util/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.037849 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/pull/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.038554 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084h2bq_cfb5cd53-4f38-4b74-98ba-d9e0107fef18/extract/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.225184 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/util/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.426802 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/pull/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.447805 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/util/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.457393 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/pull/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.587060 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/util/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.656508 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/pull/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.674137 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213z8vrm_6f4d19e1-687e-44c3-928f-bda7f0b893f9/extract/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.754608 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-utilities/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.955024 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-content/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.959913 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-utilities/0.log" Feb 16 23:08:39 crc kubenswrapper[4792]: I0216 23:08:39.968160 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-content/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.173270 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-content/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.251515 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/extract-utilities/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.438633 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-utilities/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.681884 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-utilities/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.708495 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-content/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.709649 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-content/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.908288 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-utilities/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.912861 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmzts_7cb484ab-fa97-4c10-a78e-20a51ec6618b/registry-server/0.log" Feb 16 23:08:40 crc kubenswrapper[4792]: I0216 23:08:40.918365 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/extract-content/0.log" Feb 16 23:08:41 crc kubenswrapper[4792]: E0216 23:08:41.028558 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:08:41 crc kubenswrapper[4792]: I0216 23:08:41.841988 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4q8b7_ec2413fb-5b9f-49a0-8451-8d1bc7e9c1b1/registry-server/0.log" Feb 16 23:08:41 crc kubenswrapper[4792]: I0216 23:08:41.931568 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/util/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.079308 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/util/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.080117 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/pull/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.104071 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/pull/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.444929 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/extract/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.445845 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/util/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.457993 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tdsxd_b0512fe0-f5a1-4558-a562-30ad7a59856c/pull/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.538145 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/util/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.672865 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/util/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.719438 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/pull/0.log" Feb 16 23:08:42 crc kubenswrapper[4792]: I0216 23:08:42.751983 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/pull/0.log" Feb 16 23:08:43 crc kubenswrapper[4792]: I0216 23:08:43.626355 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/util/0.log" Feb 16 23:08:43 crc kubenswrapper[4792]: I0216 23:08:43.660192 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-m6k42_0847734c-681b-4f22-af87-370debd04712/marketplace-operator/0.log" Feb 16 23:08:43 crc kubenswrapper[4792]: I0216 23:08:43.673680 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/pull/0.log" Feb 16 23:08:43 crc kubenswrapper[4792]: I0216 23:08:43.674229 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca27zwl_7378542c-ef2c-46ad-af40-8f08005d9537/extract/0.log" Feb 16 23:08:43 crc kubenswrapper[4792]: I0216 23:08:43.890112 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.075799 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.078066 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-content/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.081324 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-content/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.300821 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-content/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.312555 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.421268 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.543397 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.563258 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pblwf_d7baac81-f46f-4e76-9333-95dcdc915c42/registry-server/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.623394 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-content/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.643573 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-content/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.812990 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-utilities/0.log" Feb 16 23:08:44 crc kubenswrapper[4792]: I0216 23:08:44.839171 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/extract-content/0.log" Feb 16 23:08:45 crc kubenswrapper[4792]: I0216 23:08:45.562245 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9xfg_da72596c-78d5-40d7-99b1-282bb5bdeb6e/registry-server/0.log" Feb 16 23:08:46 crc kubenswrapper[4792]: E0216 23:08:46.029374 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:08:54 crc kubenswrapper[4792]: E0216 23:08:54.029785 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:09:00 crc kubenswrapper[4792]: E0216 23:09:00.028945 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:09:01 crc kubenswrapper[4792]: I0216 23:09:01.533036 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:09:01 crc kubenswrapper[4792]: I0216 23:09:01.533474 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:09:01 crc kubenswrapper[4792]: I0216 23:09:01.805962 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-785cg_cc1404e2-49f6-48df-99fc-24b7b05b5e33/prometheus-operator/0.log" Feb 16 23:09:01 crc kubenswrapper[4792]: I0216 23:09:01.842649 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6887ccdc77-tsb6v_2899a7e8-f5fa-4879-9df7-ba57ae9f4262/prometheus-operator-admission-webhook/0.log" Feb 16 23:09:01 crc kubenswrapper[4792]: I0216 23:09:01.860905 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6887ccdc77-4kmkg_e173d96c-280b-4293-ae21-272cce1b11bc/prometheus-operator-admission-webhook/0.log" Feb 16 23:09:02 crc kubenswrapper[4792]: I0216 23:09:02.023088 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7sqrb_85d29954-608f-4bb5-805e-5ac6d45b6652/operator/0.log" Feb 16 23:09:02 crc kubenswrapper[4792]: I0216 23:09:02.053612 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7jr7l_f912b10c-80d1-4667-b807-45a54e626fbe/perses-operator/0.log" Feb 16 23:09:02 crc kubenswrapper[4792]: I0216 23:09:02.070199 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8nqmm_6dea83c6-c1d5-4b8e-a70c-3184a366721a/observability-ui-dashboards/0.log" Feb 16 23:09:05 crc kubenswrapper[4792]: E0216 23:09:05.030202 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:09:14 crc kubenswrapper[4792]: E0216 23:09:14.028669 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:09:17 crc kubenswrapper[4792]: E0216 23:09:17.029567 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:09:18 crc kubenswrapper[4792]: I0216 23:09:18.355354 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c9d97fb5-j4f5p_e2d0a7d0-53d6-4031-894c-734f67974527/manager/0.log" Feb 16 23:09:18 crc kubenswrapper[4792]: I0216 23:09:18.388640 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6c9d97fb5-j4f5p_e2d0a7d0-53d6-4031-894c-734f67974527/kube-rbac-proxy/0.log" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.678455 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:21 crc kubenswrapper[4792]: E0216 23:09:21.679998 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="registry-server" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.680021 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="registry-server" Feb 16 23:09:21 crc kubenswrapper[4792]: E0216 23:09:21.680056 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="extract-utilities" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.680067 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="extract-utilities" Feb 16 23:09:21 crc kubenswrapper[4792]: E0216 23:09:21.680107 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="extract-content" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.680118 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="extract-content" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.680464 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d5a4ee-79c6-4b66-87ff-9e0d0571ec75" containerName="registry-server" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.683211 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.702343 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.840890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.840991 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.841085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4s5\" (UniqueName: \"kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.943076 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 
23:09:21.943456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.943556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf4s5\" (UniqueName: \"kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.943877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.944031 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:21 crc kubenswrapper[4792]: I0216 23:09:21.968064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf4s5\" (UniqueName: \"kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5\") pod \"community-operators-mz52s\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:22 crc kubenswrapper[4792]: I0216 23:09:22.016865 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:22 crc kubenswrapper[4792]: I0216 23:09:22.663508 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:22 crc kubenswrapper[4792]: I0216 23:09:22.781526 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerStarted","Data":"db1ded25423b98b5e921bafc726ee12b4bcbe9702a6232aad6cad0a2d3dd1322"} Feb 16 23:09:23 crc kubenswrapper[4792]: I0216 23:09:23.795726 4792 generic.go:334] "Generic (PLEG): container finished" podID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerID="91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701" exitCode=0 Feb 16 23:09:23 crc kubenswrapper[4792]: I0216 23:09:23.795791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerDied","Data":"91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701"} Feb 16 23:09:25 crc kubenswrapper[4792]: I0216 23:09:25.818220 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerStarted","Data":"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0"} Feb 16 23:09:26 crc kubenswrapper[4792]: I0216 23:09:26.829997 4792 generic.go:334] "Generic (PLEG): container finished" podID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerID="9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0" exitCode=0 Feb 16 23:09:26 crc kubenswrapper[4792]: I0216 23:09:26.830075 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerDied","Data":"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0"} Feb 16 23:09:27 crc kubenswrapper[4792]: E0216 23:09:27.027726 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:09:27 crc kubenswrapper[4792]: I0216 23:09:27.843963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerStarted","Data":"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e"} Feb 16 23:09:30 crc kubenswrapper[4792]: E0216 23:09:30.036562 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.532239 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:09:31 
crc kubenswrapper[4792]: I0216 23:09:31.532860 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.532910 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.533913 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.533995 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a" gracePeriod=600 Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.893040 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a" exitCode=0 Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.893374 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a"} Feb 16 23:09:31 crc kubenswrapper[4792]: I0216 23:09:31.893407 4792 scope.go:117] "RemoveContainer" containerID="3e177b2276f82dc29a3587048660119a7e7b095f001f6e3ba0b11d2b86cee4a0" Feb 16 23:09:32 crc kubenswrapper[4792]: I0216 23:09:32.017036 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:32 crc kubenswrapper[4792]: I0216 23:09:32.017101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:32 crc kubenswrapper[4792]: I0216 23:09:32.904968 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerStarted","Data":"697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"} Feb 16 23:09:32 crc kubenswrapper[4792]: I0216 23:09:32.940438 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mz52s" podStartSLOduration=8.345510729 podStartE2EDuration="11.940412733s" podCreationTimestamp="2026-02-16 23:09:21 +0000 UTC" firstStartedPulling="2026-02-16 23:09:23.797779697 +0000 UTC m=+5496.451058588" lastFinishedPulling="2026-02-16 23:09:27.392681701 +0000 UTC m=+5500.045960592" observedRunningTime="2026-02-16 23:09:27.863839793 +0000 UTC m=+5500.517118694" watchObservedRunningTime="2026-02-16 23:09:32.940412733 +0000 UTC m=+5505.593691624" Feb 16 23:09:33 crc 
kubenswrapper[4792]: I0216 23:09:33.077137 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mz52s" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="registry-server" probeResult="failure" output=< Feb 16 23:09:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 23:09:33 crc kubenswrapper[4792]: > Feb 16 23:09:40 crc kubenswrapper[4792]: E0216 23:09:40.028528 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:09:41 crc kubenswrapper[4792]: E0216 23:09:41.028021 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:09:42 crc kubenswrapper[4792]: I0216 23:09:42.105017 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:42 crc kubenswrapper[4792]: I0216 23:09:42.177030 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:42 crc kubenswrapper[4792]: I0216 23:09:42.358305 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.029998 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mz52s" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="registry-server" containerID="cri-o://b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e" gracePeriod=2 Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.662305 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.741913 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf4s5\" (UniqueName: \"kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5\") pod \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.742169 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content\") pod \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.742199 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities\") pod \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\" (UID: \"7387e4e6-92ea-4897-a22b-f5d9a1f70807\") " Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.743124 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities" (OuterVolumeSpecName: "utilities") pod "7387e4e6-92ea-4897-a22b-f5d9a1f70807" (UID: "7387e4e6-92ea-4897-a22b-f5d9a1f70807"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.749824 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5" (OuterVolumeSpecName: "kube-api-access-vf4s5") pod "7387e4e6-92ea-4897-a22b-f5d9a1f70807" (UID: "7387e4e6-92ea-4897-a22b-f5d9a1f70807"). InnerVolumeSpecName "kube-api-access-vf4s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.836507 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7387e4e6-92ea-4897-a22b-f5d9a1f70807" (UID: "7387e4e6-92ea-4897-a22b-f5d9a1f70807"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.846189 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.846227 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7387e4e6-92ea-4897-a22b-f5d9a1f70807-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:44 crc kubenswrapper[4792]: I0216 23:09:44.846240 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf4s5\" (UniqueName: \"kubernetes.io/projected/7387e4e6-92ea-4897-a22b-f5d9a1f70807-kube-api-access-vf4s5\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.041700 4792 generic.go:334] "Generic (PLEG): container finished" podID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerID="b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e" exitCode=0 Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.041746 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerDied","Data":"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e"} Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.041781 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz52s" event={"ID":"7387e4e6-92ea-4897-a22b-f5d9a1f70807","Type":"ContainerDied","Data":"db1ded25423b98b5e921bafc726ee12b4bcbe9702a6232aad6cad0a2d3dd1322"} Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.041801 4792 scope.go:117] "RemoveContainer" containerID="b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.041752 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz52s" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.064002 4792 scope.go:117] "RemoveContainer" containerID="9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.090927 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.095146 4792 scope.go:117] "RemoveContainer" containerID="91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.111290 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mz52s"] Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.138027 4792 scope.go:117] "RemoveContainer" containerID="b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e" Feb 16 23:09:45 crc kubenswrapper[4792]: E0216 23:09:45.138725 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e\": container with ID starting with b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e not found: ID does not exist" containerID="b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.138774 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e"} err="failed to get container status \"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e\": rpc error: code = NotFound desc = could not find container \"b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e\": container with ID starting with b8bc0598752f0a906ffc40967bb91dd288b959202579200aef25dd3effc6f05e not found: ID does not exist" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.138801 4792 scope.go:117] "RemoveContainer" containerID="9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0" Feb 16 23:09:45 crc kubenswrapper[4792]: E0216 23:09:45.139196 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0\": container with ID starting with 9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0 not found: ID does not exist" containerID="9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.139231 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0"} err="failed to get container status \"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0\": rpc error: code = NotFound desc = could not find container \"9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0\": container with ID starting with 9d623b74f040f76d1d6747ae166c15ee038ebdaad6792fea43cc15037892fce0 not found: ID does not exist" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.139247 4792 scope.go:117] "RemoveContainer" containerID="91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701" Feb 16 23:09:45 crc kubenswrapper[4792]: E0216 23:09:45.139531 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701\": container with ID starting with 91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701 not found: ID does not exist" containerID="91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701" Feb 16 23:09:45 crc kubenswrapper[4792]: I0216 23:09:45.139555 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701"} err="failed to get container status \"91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701\": rpc error: code = NotFound desc = could not find container \"91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701\": container with ID starting with 91b3df1c8f5553827e6ef177abe5a34b795c4496a21296d6ada26ee7e9031701 not found: ID does not exist" Feb 16 23:09:46 crc kubenswrapper[4792]: I0216 23:09:46.038514 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" path="/var/lib/kubelet/pods/7387e4e6-92ea-4897-a22b-f5d9a1f70807/volumes" Feb 16 23:09:52 crc kubenswrapper[4792]: E0216 23:09:52.030223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:09:53 crc kubenswrapper[4792]: E0216 23:09:53.028953 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:10:02 crc kubenswrapper[4792]: I0216 23:10:02.677576 4792 scope.go:117] "RemoveContainer" containerID="5d8ab09807b29e622f2074777fff97dd12a2466b1bf2ccf19b4baf7ef795449b" Feb 16 23:10:04 crc kubenswrapper[4792]: I0216 23:10:04.032474 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:10:04 crc kubenswrapper[4792]: E0216 23:10:04.162219 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 23:10:04 crc kubenswrapper[4792]: E0216 23:10:04.162470 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 23:10:04 crc kubenswrapper[4792]: E0216 23:10:04.162663 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxv4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jndsb_openstack(c7d886e6-27ad-48f2-a820-76ae43892a4f): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 23:10:04 crc kubenswrapper[4792]: E0216 23:10:04.164763 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:10:08 crc kubenswrapper[4792]: E0216 23:10:08.043184 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:10:15 crc kubenswrapper[4792]: E0216 23:10:15.029492 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:10:21 crc kubenswrapper[4792]: E0216 23:10:21.131793 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:10:21 crc kubenswrapper[4792]: E0216 23:10:21.132852 4792 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 23:10:21 crc kubenswrapper[4792]: E0216 23:10:21.133142 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h699h664hddh555hb7h659hd5h66dh565h5c5h567h555hbh54ch85h5b9h698hdfh65dh76h54fhc8h567h66bh5bbh68fh58dh84h57bhbchb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e58723ee-d9c2-4b71-b072-3cf7b2a26c12): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 23:10:21 crc kubenswrapper[4792]: E0216 23:10:21.134449 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:10:29 crc kubenswrapper[4792]: E0216 23:10:29.029884 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:10:35 crc kubenswrapper[4792]: E0216 23:10:35.029125 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:10:44 crc kubenswrapper[4792]: E0216 23:10:44.028293 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:10:51 crc kubenswrapper[4792]: E0216 23:10:51.030583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:10:58 crc kubenswrapper[4792]: E0216 23:10:58.038392 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:11:04 crc kubenswrapper[4792]: E0216 23:11:04.031675 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:11:11 crc kubenswrapper[4792]: E0216 23:11:11.030410 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:11:12 crc kubenswrapper[4792]: I0216 23:11:12.098835 4792 generic.go:334] "Generic (PLEG): container finished" podID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerID="9a8cf0d6f221fc2970f86625c8e1be47e9ee05ec002fb25d23f490965c276a94" exitCode=0 Feb 16 23:11:12 crc kubenswrapper[4792]: I0216 23:11:12.098913 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" event={"ID":"6878f63d-35aa-4e64-b246-f3b6395d0383","Type":"ContainerDied","Data":"9a8cf0d6f221fc2970f86625c8e1be47e9ee05ec002fb25d23f490965c276a94"} Feb 16 23:11:12 crc kubenswrapper[4792]: I0216 
23:11:12.099973 4792 scope.go:117] "RemoveContainer" containerID="9a8cf0d6f221fc2970f86625c8e1be47e9ee05ec002fb25d23f490965c276a94" Feb 16 23:11:12 crc kubenswrapper[4792]: I0216 23:11:12.247024 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcr5s_must-gather-8ttm4_6878f63d-35aa-4e64-b246-f3b6395d0383/gather/0.log" Feb 16 23:11:16 crc kubenswrapper[4792]: E0216 23:11:16.030403 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:11:20 crc kubenswrapper[4792]: I0216 23:11:20.528804 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcr5s/must-gather-8ttm4"] Feb 16 23:11:20 crc kubenswrapper[4792]: I0216 23:11:20.529612 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="copy" containerID="cri-o://a9271518297ad0b26e37a2f6200f9e4a4a10064a9ef519ed19c9bc2513205c6e" gracePeriod=2 Feb 16 23:11:20 crc kubenswrapper[4792]: I0216 23:11:20.538905 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcr5s/must-gather-8ttm4"] Feb 16 23:11:20 crc kubenswrapper[4792]: I0216 23:11:20.818992 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcr5s_must-gather-8ttm4_6878f63d-35aa-4e64-b246-f3b6395d0383/copy/0.log" Feb 16 23:11:20 crc kubenswrapper[4792]: I0216 23:11:20.820112 4792 generic.go:334] "Generic (PLEG): container finished" podID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerID="a9271518297ad0b26e37a2f6200f9e4a4a10064a9ef519ed19c9bc2513205c6e" exitCode=143 Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.049307 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcr5s_must-gather-8ttm4_6878f63d-35aa-4e64-b246-f3b6395d0383/copy/0.log" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.049750 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.151663 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lgm9\" (UniqueName: \"kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9\") pod \"6878f63d-35aa-4e64-b246-f3b6395d0383\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.151741 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output\") pod \"6878f63d-35aa-4e64-b246-f3b6395d0383\" (UID: \"6878f63d-35aa-4e64-b246-f3b6395d0383\") " Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.171732 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9" (OuterVolumeSpecName: "kube-api-access-7lgm9") pod "6878f63d-35aa-4e64-b246-f3b6395d0383" (UID: "6878f63d-35aa-4e64-b246-f3b6395d0383"). InnerVolumeSpecName "kube-api-access-7lgm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.256558 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lgm9\" (UniqueName: \"kubernetes.io/projected/6878f63d-35aa-4e64-b246-f3b6395d0383-kube-api-access-7lgm9\") on node \"crc\" DevicePath \"\"" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.347919 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6878f63d-35aa-4e64-b246-f3b6395d0383" (UID: "6878f63d-35aa-4e64-b246-f3b6395d0383"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.360903 4792 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6878f63d-35aa-4e64-b246-f3b6395d0383-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.830590 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcr5s_must-gather-8ttm4_6878f63d-35aa-4e64-b246-f3b6395d0383/copy/0.log" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.832068 4792 scope.go:117] "RemoveContainer" containerID="a9271518297ad0b26e37a2f6200f9e4a4a10064a9ef519ed19c9bc2513205c6e" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.832273 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcr5s/must-gather-8ttm4" Feb 16 23:11:21 crc kubenswrapper[4792]: I0216 23:11:21.855249 4792 scope.go:117] "RemoveContainer" containerID="9a8cf0d6f221fc2970f86625c8e1be47e9ee05ec002fb25d23f490965c276a94" Feb 16 23:11:22 crc kubenswrapper[4792]: I0216 23:11:22.040279 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" path="/var/lib/kubelet/pods/6878f63d-35aa-4e64-b246-f3b6395d0383/volumes" Feb 16 23:11:23 crc kubenswrapper[4792]: E0216 23:11:23.028573 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:11:28 crc kubenswrapper[4792]: E0216 23:11:28.038042 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:11:31 crc kubenswrapper[4792]: I0216 23:11:31.532961 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:11:31 crc kubenswrapper[4792]: I0216 23:11:31.533568 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:11:34 crc kubenswrapper[4792]: E0216 23:11:34.028981 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:11:40 crc kubenswrapper[4792]: E0216 23:11:40.031033 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:11:45 crc kubenswrapper[4792]: E0216 23:11:45.031774 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:11:52 crc kubenswrapper[4792]: E0216 23:11:52.029675 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:11:58 crc kubenswrapper[4792]: E0216 23:11:58.045490 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:12:01 crc kubenswrapper[4792]: I0216 23:12:01.532532 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:12:01 crc kubenswrapper[4792]: I0216 23:12:01.533146 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:12:07 crc kubenswrapper[4792]: E0216 23:12:07.030476 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:12:11 crc kubenswrapper[4792]: E0216 23:12:11.030124 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:12:20 crc kubenswrapper[4792]: E0216 23:12:20.031007 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:12:23 crc kubenswrapper[4792]: E0216 23:12:23.029456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.531991 4792 patch_prober.go:28] interesting pod/machine-config-daemon-szmc4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.532503 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.532541 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.533394 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"} pod="openshift-machine-config-operator/machine-config-daemon-szmc4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.533448 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerName="machine-config-daemon" containerID="cri-o://697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" gracePeriod=600 Feb 16 23:12:31 crc kubenswrapper[4792]: E0216 23:12:31.664647 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.677913 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" exitCode=0 Feb 16 23:12:31 crc kubenswrapper[4792]: 
I0216 23:12:31.678005 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" event={"ID":"5f759c59-befa-4d12-ab4b-c4e579fba2bd","Type":"ContainerDied","Data":"697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"} Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.678080 4792 scope.go:117] "RemoveContainer" containerID="a1bcc60a02d6dacb739d194be7985081f85b78d3c9ae25cd3f32b785cc1d079a" Feb 16 23:12:31 crc kubenswrapper[4792]: I0216 23:12:31.679361 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:12:31 crc kubenswrapper[4792]: E0216 23:12:31.679851 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:12:32 crc kubenswrapper[4792]: E0216 23:12:32.030533 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:12:35 crc kubenswrapper[4792]: E0216 23:12:35.029296 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:12:47 crc kubenswrapper[4792]: I0216 23:12:47.026881 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:12:47 crc kubenswrapper[4792]: E0216 23:12:47.028119 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:12:47 crc kubenswrapper[4792]: E0216 23:12:47.028851 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:12:50 crc kubenswrapper[4792]: E0216 23:12:50.028827 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:12:58 crc kubenswrapper[4792]: E0216 23:12:58.036633 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:13:01 crc kubenswrapper[4792]: I0216 23:13:01.026454 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:13:01 crc kubenswrapper[4792]: E0216 23:13:01.027446 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:13:04 crc kubenswrapper[4792]: E0216 23:13:04.028995 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:13:09 crc kubenswrapper[4792]: E0216 23:13:09.029348 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:13:13 crc kubenswrapper[4792]: I0216 23:13:13.026098 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:13:13 crc kubenswrapper[4792]: E0216 23:13:13.026706 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:13:15 crc kubenswrapper[4792]: E0216 23:13:15.031772 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:13:24 crc kubenswrapper[4792]: E0216 23:13:24.028711 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:13:25 crc kubenswrapper[4792]: I0216 23:13:25.026781 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:13:25 crc kubenswrapper[4792]: E0216 23:13:25.027556 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:13:30 crc kubenswrapper[4792]: E0216 23:13:30.029266 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:13:36 crc kubenswrapper[4792]: E0216 23:13:36.027846 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:13:38 crc kubenswrapper[4792]: I0216 23:13:38.036007 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:13:38 crc kubenswrapper[4792]: E0216 23:13:38.036466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:13:44 crc kubenswrapper[4792]: E0216 23:13:44.029544 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.443328 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lq286"] Feb 16 23:13:45 crc kubenswrapper[4792]: E0216 23:13:45.444267 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="extract-content" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444283 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="extract-content" Feb 16 23:13:45 crc kubenswrapper[4792]: E0216 23:13:45.444302 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="registry-server" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444310 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="registry-server" Feb 16 23:13:45 crc kubenswrapper[4792]: E0216 23:13:45.444327 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="extract-utilities" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444337 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="extract-utilities" Feb 16 23:13:45 crc kubenswrapper[4792]: E0216 23:13:45.444348 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="copy" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444355 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="copy" Feb 16 23:13:45 crc kubenswrapper[4792]: E0216 23:13:45.444378 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="gather" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444384 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="gather" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444592 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="gather" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444631 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6878f63d-35aa-4e64-b246-f3b6395d0383" containerName="copy" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.444654 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387e4e6-92ea-4897-a22b-f5d9a1f70807" containerName="registry-server" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.446219 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.474702 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lq286"] Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.573657 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ngx\" (UniqueName: \"kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.573743 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.573936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.676509 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.676710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z8ngx\" (UniqueName: \"kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.676789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.677105 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.677213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.717365 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ngx\" (UniqueName: \"kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx\") pod \"redhat-operators-lq286\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:45 crc kubenswrapper[4792]: I0216 23:13:45.768945 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:46 crc kubenswrapper[4792]: I0216 23:13:46.351528 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lq286"] Feb 16 23:13:46 crc kubenswrapper[4792]: I0216 23:13:46.572208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerStarted","Data":"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029"} Feb 16 23:13:46 crc kubenswrapper[4792]: I0216 23:13:46.572418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerStarted","Data":"fb8673bf3305bffed2e8e9dd43dbf54a2aaa6a43c7157734cd27a0dd20fbfea6"} Feb 16 23:13:47 crc kubenswrapper[4792]: E0216 23:13:47.027208 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:13:47 crc kubenswrapper[4792]: I0216 23:13:47.583005 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerID="e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029" exitCode=0 Feb 16 23:13:47 crc kubenswrapper[4792]: I0216 23:13:47.583073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerDied","Data":"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029"} Feb 16 23:13:48 crc kubenswrapper[4792]: I0216 23:13:48.597643 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerStarted","Data":"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868"} Feb 16 23:13:50 crc kubenswrapper[4792]: I0216 23:13:50.027097 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:13:50 crc kubenswrapper[4792]: E0216 23:13:50.027966 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:13:53 crc kubenswrapper[4792]: I0216 23:13:53.661284 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerID="59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868" exitCode=0 Feb 16 23:13:53 crc kubenswrapper[4792]: I0216 23:13:53.661411 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerDied","Data":"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868"} Feb 16 23:13:54 crc kubenswrapper[4792]: I0216 23:13:54.674052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerStarted","Data":"26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9"} Feb 16 23:13:54 crc kubenswrapper[4792]: I0216 23:13:54.704633 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lq286" podStartSLOduration=3.180076518 podStartE2EDuration="9.704577117s" podCreationTimestamp="2026-02-16 23:13:45 +0000 UTC" firstStartedPulling="2026-02-16 23:13:47.58488541 +0000 UTC m=+5760.238164301" lastFinishedPulling="2026-02-16 23:13:54.109385979 +0000 UTC m=+5766.762664900" observedRunningTime="2026-02-16 23:13:54.698231787 +0000 UTC m=+5767.351510678" watchObservedRunningTime="2026-02-16 23:13:54.704577117 +0000 UTC m=+5767.357856048" Feb 16 23:13:55 crc kubenswrapper[4792]: I0216 23:13:55.769624 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:55 crc kubenswrapper[4792]: I0216 23:13:55.769960 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:13:57 crc kubenswrapper[4792]: E0216 23:13:57.029550 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:13:57 crc kubenswrapper[4792]: I0216 23:13:57.072137 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lq286" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" probeResult="failure" output=< Feb 16 23:13:57 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 23:13:57 crc kubenswrapper[4792]: > Feb 16 23:14:01 crc kubenswrapper[4792]: I0216 23:14:01.026709 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:14:01 crc kubenswrapper[4792]: E0216 23:14:01.027985 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:14:01 crc kubenswrapper[4792]: E0216 23:14:01.030450 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:14:06 crc kubenswrapper[4792]: I0216 23:14:06.841733 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lq286" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" probeResult="failure" output=< Feb 16 23:14:06 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Feb 16 23:14:06 crc kubenswrapper[4792]: > Feb 16 23:14:08 crc 
Feb 16 23:14:08 crc kubenswrapper[4792]: E0216 23:14:08.037963 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:14:14 crc kubenswrapper[4792]: I0216 23:14:14.026816 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"
Feb 16 23:14:14 crc kubenswrapper[4792]: E0216 23:14:14.028625 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:14:15 crc kubenswrapper[4792]: E0216 23:14:15.030505 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:14:15 crc kubenswrapper[4792]: I0216 23:14:15.821281 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lq286"
Feb 16 23:14:15 crc kubenswrapper[4792]: I0216 23:14:15.877087 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lq286"
Feb 16 23:14:16 crc kubenswrapper[4792]: I0216 23:14:16.661140 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lq286"]
Feb 16 23:14:16 crc kubenswrapper[4792]: I0216 23:14:16.940860 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lq286" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" containerID="cri-o://26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.697399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content\") pod \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.697954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities\") pod \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.698116 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ngx\" (UniqueName: \"kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx\") pod \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\" (UID: \"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2\") " Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.699442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities" (OuterVolumeSpecName: "utilities") pod "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" (UID: "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.703280 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx" (OuterVolumeSpecName: "kube-api-access-z8ngx") pod "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" (UID: "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2"). InnerVolumeSpecName "kube-api-access-z8ngx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.802150 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.802200 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ngx\" (UniqueName: \"kubernetes.io/projected/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-kube-api-access-z8ngx\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.848667 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" (UID: "3c02d6ea-22e8-43ef-ad77-a07fd32d96c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.904247 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.956088 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerID="26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9" exitCode=0 Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.956139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerDied","Data":"26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9"} Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.956173 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq286" event={"ID":"3c02d6ea-22e8-43ef-ad77-a07fd32d96c2","Type":"ContainerDied","Data":"fb8673bf3305bffed2e8e9dd43dbf54a2aaa6a43c7157734cd27a0dd20fbfea6"} Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.956186 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lq286" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.956194 4792 scope.go:117] "RemoveContainer" containerID="26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.994577 4792 scope.go:117] "RemoveContainer" containerID="59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868" Feb 16 23:14:17 crc kubenswrapper[4792]: I0216 23:14:17.999747 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lq286"] Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.009089 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lq286"] Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.026656 4792 scope.go:117] "RemoveContainer" containerID="e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029" Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.044441 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" path="/var/lib/kubelet/pods/3c02d6ea-22e8-43ef-ad77-a07fd32d96c2/volumes" Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.072375 4792 scope.go:117] "RemoveContainer" containerID="26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9" Feb 16 23:14:18 crc kubenswrapper[4792]: E0216 23:14:18.072768 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9\": container with ID starting with 26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9 not found: ID does not exist" containerID="26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9" Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.072808 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9"} err="failed to get container status \"26789fa7e260e3d13d80b33a57aaa7aa35957e0aa8c23ebb6b37e6d100d8c3a9\": rpc error: code = NotFound desc 
Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.072837 4792 scope.go:117] "RemoveContainer" containerID="59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868"
Feb 16 23:14:18 crc kubenswrapper[4792]: E0216 23:14:18.073167 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868\": container with ID starting with 59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868 not found: ID does not exist" containerID="59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868"
Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.073195 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868"} err="failed to get container status \"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868\": rpc error: code = NotFound desc = could not find container \"59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868\": container with ID starting with 59ff0ac8376c1ef1d6f9ae526c6bc04d40d37ab5a6a3a9541e54e85bd7a9e868 not found: ID does not exist"
Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.073211 4792 scope.go:117] "RemoveContainer" containerID="e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029"
Feb 16 23:14:18 crc kubenswrapper[4792]: E0216 23:14:18.073484 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029\": container with ID starting with e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029 not found: ID does not exist" containerID="e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029"
Feb 16 23:14:18 crc kubenswrapper[4792]: I0216 23:14:18.073511 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029"} err="failed to get container status \"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029\": rpc error: code = NotFound desc = could not find container \"e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029\": container with ID starting with e6e30ce9fd5608d45a3dd300996b564051e737fd012c64319c8e4ed5d8747029 not found: ID does not exist"
Feb 16 23:14:23 crc kubenswrapper[4792]: E0216 23:14:23.029114 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:14:25 crc kubenswrapper[4792]: I0216 23:14:25.028296 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"
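
The NotFound errors above are benign: the kubelet retries RemoveContainer for IDs the runtime has already deleted, and "ID does not exist" simply means the work is already done. A minimal sketch of the same idempotent pattern applied to API-side cleanup (the pod name and namespace are just examples):

    # Treat "not found" as already-deleted rather than as a failure.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def delete_pod_if_present(name: str, namespace: str) -> None:
        try:
            v1.delete_namespaced_pod(name, namespace)
            print("deleted", name)
        except ApiException as exc:
            if exc.status == 404:  # mirror the kubelet: NotFound means done
                print(name, "already gone")
            else:
                raise

    delete_pod_if_present("redhat-operators-lq286", "openshift-marketplace")
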
Feb 16 23:14:25 crc kubenswrapper[4792]: E0216 23:14:25.032435 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:14:29 crc kubenswrapper[4792]: E0216 23:14:29.029040 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:14:38 crc kubenswrapper[4792]: E0216 23:14:38.042485 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:14:39 crc kubenswrapper[4792]: I0216 23:14:39.027529 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"
Feb 16 23:14:39 crc kubenswrapper[4792]: E0216 23:14:39.027906 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
Feb 16 23:14:41 crc kubenswrapper[4792]: E0216 23:14:41.029283 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:14:50 crc kubenswrapper[4792]: E0216 23:14:50.029232 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f"
Feb 16 23:14:52 crc kubenswrapper[4792]: I0216 23:14:52.029073 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe"
Feb 16 23:14:52 crc kubenswrapper[4792]: E0216 23:14:52.029474 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12"
Feb 16 23:14:52 crc kubenswrapper[4792]: E0216 23:14:52.030589 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd"
podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.177678 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg"] Feb 16 23:15:00 crc kubenswrapper[4792]: E0216 23:15:00.178586 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="extract-content" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.178613 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="extract-content" Feb 16 23:15:00 crc kubenswrapper[4792]: E0216 23:15:00.178642 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.178648 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" Feb 16 23:15:00 crc kubenswrapper[4792]: E0216 23:15:00.178678 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="extract-utilities" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.178684 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="extract-utilities" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.178942 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c02d6ea-22e8-43ef-ad77-a07fd32d96c2" containerName="registry-server" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.179879 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.188330 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.188522 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.189592 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg"] Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.366053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.366111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.366334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9gz\" (UniqueName: 
\"kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.467843 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9gz\" (UniqueName: \"kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.468037 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.468069 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.469114 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.479664 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.486967 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9gz\" (UniqueName: \"kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz\") pod \"collect-profiles-29521395-jzhsg\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:00 crc kubenswrapper[4792]: I0216 23:15:00.511236 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:01 crc kubenswrapper[4792]: I0216 23:15:01.021479 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg"] Feb 16 23:15:02 crc kubenswrapper[4792]: I0216 23:15:02.465389 4792 generic.go:334] "Generic (PLEG): container finished" podID="7a7d6743-886b-49dc-addb-316ac13a7e49" containerID="da8d4a1b2403c9bd7ecc4485d28b6772ce97b6d89ab023b653c35f79506216e6" exitCode=0 Feb 16 23:15:02 crc kubenswrapper[4792]: I0216 23:15:02.465967 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" event={"ID":"7a7d6743-886b-49dc-addb-316ac13a7e49","Type":"ContainerDied","Data":"da8d4a1b2403c9bd7ecc4485d28b6772ce97b6d89ab023b653c35f79506216e6"} Feb 16 23:15:02 crc kubenswrapper[4792]: I0216 23:15:02.465998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" event={"ID":"7a7d6743-886b-49dc-addb-316ac13a7e49","Type":"ContainerStarted","Data":"a891490620298f3be6b4d7f9196e2d56912aea6646a6fdb58ed2dc48d836b6f5"} Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.895153 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.964556 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume\") pod \"7a7d6743-886b-49dc-addb-316ac13a7e49\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.965333 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a7d6743-886b-49dc-addb-316ac13a7e49" (UID: "7a7d6743-886b-49dc-addb-316ac13a7e49"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.966049 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume\") pod \"7a7d6743-886b-49dc-addb-316ac13a7e49\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.966322 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms9gz\" (UniqueName: \"kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz\") pod \"7a7d6743-886b-49dc-addb-316ac13a7e49\" (UID: \"7a7d6743-886b-49dc-addb-316ac13a7e49\") " Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.968352 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7d6743-886b-49dc-addb-316ac13a7e49-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.972358 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a7d6743-886b-49dc-addb-316ac13a7e49" (UID: "7a7d6743-886b-49dc-addb-316ac13a7e49"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:15:03 crc kubenswrapper[4792]: I0216 23:15:03.982790 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz" (OuterVolumeSpecName: "kube-api-access-ms9gz") pod "7a7d6743-886b-49dc-addb-316ac13a7e49" (UID: "7a7d6743-886b-49dc-addb-316ac13a7e49"). InnerVolumeSpecName "kube-api-access-ms9gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:15:04 crc kubenswrapper[4792]: E0216 23:15:04.028736 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-jndsb" podUID="c7d886e6-27ad-48f2-a820-76ae43892a4f" Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.069923 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7d6743-886b-49dc-addb-316ac13a7e49-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.069951 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms9gz\" (UniqueName: \"kubernetes.io/projected/7a7d6743-886b-49dc-addb-316ac13a7e49-kube-api-access-ms9gz\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.494201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" event={"ID":"7a7d6743-886b-49dc-addb-316ac13a7e49","Type":"ContainerDied","Data":"a891490620298f3be6b4d7f9196e2d56912aea6646a6fdb58ed2dc48d836b6f5"} Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.494420 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a891490620298f3be6b4d7f9196e2d56912aea6646a6fdb58ed2dc48d836b6f5" Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.494244 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-jzhsg" Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.978812 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk"] Feb 16 23:15:04 crc kubenswrapper[4792]: I0216 23:15:04.992888 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521350-2qsxk"] Feb 16 23:15:06 crc kubenswrapper[4792]: I0216 23:15:06.028371 4792 scope.go:117] "RemoveContainer" containerID="697264496ba87726535953bbb4f54a7ff0fb593c656d8e279926a29a04d34fbe" Feb 16 23:15:06 crc kubenswrapper[4792]: E0216 23:15:06.029055 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-szmc4_openshift-machine-config-operator(5f759c59-befa-4d12-ab4b-c4e579fba2bd)\"" pod="openshift-machine-config-operator/machine-config-daemon-szmc4" podUID="5f759c59-befa-4d12-ab4b-c4e579fba2bd" Feb 16 23:15:06 crc kubenswrapper[4792]: E0216 23:15:06.031881 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e58723ee-d9c2-4b71-b072-3cf7b2a26c12" Feb 16 23:15:06 crc kubenswrapper[4792]: I0216 23:15:06.045637 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b068db64-d873-4f93-b01a-7775abe02348" path="/var/lib/kubelet/pods/b068db64-d873-4f93-b01a-7775abe02348/volumes"